Jan 21 18:14:02 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 21 18:14:03 crc kubenswrapper[5099]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 21 18:14:03 crc kubenswrapper[5099]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Jan 21 18:14:03 crc kubenswrapper[5099]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 21 18:14:03 crc kubenswrapper[5099]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 21 18:14:03 crc kubenswrapper[5099]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 21 18:14:03 crc kubenswrapper[5099]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.350405 5099 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.353822 5099 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.353850 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.353855 5099 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.353861 5099 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.353868 5099 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
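Four of the six deprecated flags above have direct equivalents in the file passed via --config, which the flags.go:64 dump further down shows as /etc/kubernetes/kubelet.conf. A minimal sketch of what that migration could look like, assuming the file follows the upstream KubeletConfiguration v1beta1 schema; the values are copied from the flag dump later in this log, not from the file actually on this node:

```yaml
# Hypothetical fragment of /etc/kubernetes/kubelet.conf (KubeletConfiguration schema).
# Values mirror the flags.go:64 dump below; the node's real file may differ.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock    # replaces --container-runtime-endpoint=/var/run/crio/crio.sock (unix:// scheme added)
volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec # replaces --volume-plugin-dir
registerWithTaints:                                          # replaces --register-with-taints=node-role.kubernetes.io/master=:NoSchedule
- key: node-role.kubernetes.io/master
  effect: NoSchedule
systemReserved:                                              # replaces --system-reserved
  cpu: 200m
  ephemeral-storage: 350Mi
  memory: 350Mi
```

--pod-infra-container-image has no config-file replacement; per the warning above it is removed in 1.35 and the sandbox image is taken from the CRI runtime (here CRI-O) instead. --minimum-container-ttl-duration maps to eviction settings, shown after the flag dump below.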
Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.353875 5099 feature_gate.go:328] unrecognized feature gate: Example2
Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.353880 5099 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.353885 5099 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.353889 5099 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.353894 5099 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.353898 5099 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.353902 5099 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.353907 5099 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.353912 5099 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.353916 5099 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.353921 5099 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.353925 5099 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.353929 5099 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.353933 5099 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.353937 5099 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.353941 5099 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.353945 5099 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.353949 5099 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.353953 5099 feature_gate.go:328] unrecognized feature gate: Example
Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.353958 5099 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.353962 5099 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.353967 5099 feature_gate.go:328] unrecognized feature gate: DualReplica
Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.353982 5099 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
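The long runs of "unrecognized feature gate" warnings here and below are all produced the same way: OpenShift hands its full cluster gate set to the kubelet, and the upstream gate registry (feature_gate.go:328) warns on and ignores every gate it does not define, while gates it does define but that are deprecated (KMSv1, feature_gate.go:349) or already GA (ServiceAccountTokenNodeBinding, feature_gate.go:351) each get their own one-line warning. A sketch of a featureGates stanza that would produce warnings of exactly these three kinds, assuming the gates arrive through the kubelet config file; only the gate names come from this log, and the true/false values on the OpenShift-only gates are illustrative, since the kubelet drops them before printing the resolved map (feature_gate.go:384, later in the log):

```yaml
# Hypothetical featureGates stanza; gate names from this log, values illustrative.
featureGates:
  KMSv1: true                           # known but deprecated -> feature_gate.go:349 warning
  ServiceAccountTokenNodeBinding: true  # known but already GA -> feature_gate.go:351 warning
  RouteAdvertisements: true             # OpenShift-only -> "unrecognized feature gate" warning
  GatewayAPI: true                      # OpenShift-only -> "unrecognized feature gate" warning
```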
Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.353988 5099 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.353994 5099 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.353999 5099 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354005 5099 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354010 5099 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354015 5099 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354019 5099 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354023 5099 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354028 5099 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354032 5099 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354038 5099 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354042 5099 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354046 5099 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354050 5099 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354054 5099 feature_gate.go:328] unrecognized feature gate: PinnedImages Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354059 5099 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354063 5099 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354067 5099 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354071 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354079 5099 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354124 5099 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354128 5099 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354133 5099 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354138 5099 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354142 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354147 5099 
feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354151 5099 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354155 5099 feature_gate.go:328] unrecognized feature gate: InsightsConfig Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354159 5099 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354163 5099 feature_gate.go:328] unrecognized feature gate: OVNObservability Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354168 5099 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354174 5099 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354178 5099 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354182 5099 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354187 5099 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354191 5099 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354196 5099 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354200 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354204 5099 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354208 5099 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354217 5099 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354221 5099 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354226 5099 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354230 5099 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354234 5099 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354238 5099 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354243 5099 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354248 5099 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354252 5099 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354256 5099 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354260 5099 feature_gate.go:328] unrecognized feature gate: 
MachineAPIMigration Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354264 5099 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354268 5099 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354272 5099 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354278 5099 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354283 5099 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354287 5099 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354292 5099 feature_gate.go:328] unrecognized feature gate: SignatureStores Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354915 5099 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354924 5099 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354928 5099 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354934 5099 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354940 5099 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354945 5099 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354950 5099 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354954 5099 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354958 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354963 5099 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354966 5099 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354970 5099 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354974 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354978 5099 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354985 5099 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354989 5099 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354993 5099 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.354998 5099 feature_gate.go:328] unrecognized feature gate: 
ExternalSnapshotMetadata Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355003 5099 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355008 5099 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355013 5099 feature_gate.go:328] unrecognized feature gate: InsightsConfig Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355017 5099 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355021 5099 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355025 5099 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355029 5099 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355033 5099 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355037 5099 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355041 5099 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355045 5099 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355049 5099 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355053 5099 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355058 5099 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355062 5099 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355067 5099 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355071 5099 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355075 5099 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355080 5099 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355084 5099 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355087 5099 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355091 5099 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355096 5099 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355100 5099 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355104 5099 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Jan 21 18:14:03 crc kubenswrapper[5099]: 
W0121 18:14:03.355107 5099 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355111 5099 feature_gate.go:328] unrecognized feature gate: PinnedImages Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355115 5099 feature_gate.go:328] unrecognized feature gate: SignatureStores Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355122 5099 feature_gate.go:328] unrecognized feature gate: Example2 Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355127 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355131 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355136 5099 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355141 5099 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355145 5099 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355150 5099 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355155 5099 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355159 5099 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355163 5099 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355167 5099 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355172 5099 feature_gate.go:328] unrecognized feature gate: OVNObservability Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355176 5099 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355179 5099 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355185 5099 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355189 5099 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355193 5099 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355198 5099 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355201 5099 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355205 5099 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355210 5099 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355214 5099 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355218 5099 
feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355222 5099 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355226 5099 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355230 5099 feature_gate.go:328] unrecognized feature gate: Example Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355234 5099 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355238 5099 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355242 5099 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355247 5099 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355251 5099 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355256 5099 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355263 5099 feature_gate.go:328] unrecognized feature gate: DualReplica Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355268 5099 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355273 5099 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355277 5099 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355281 5099 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355285 5099 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355289 5099 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.355293 5099 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355398 5099 flags.go:64] FLAG: --address="0.0.0.0" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355410 5099 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355418 5099 flags.go:64] FLAG: --anonymous-auth="true" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355425 5099 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355431 5099 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355436 5099 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355443 5099 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355449 5099 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355455 5099 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 
21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355461 5099 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355466 5099 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355471 5099 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355477 5099 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355482 5099 flags.go:64] FLAG: --cgroup-root="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355486 5099 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355491 5099 flags.go:64] FLAG: --client-ca-file="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355496 5099 flags.go:64] FLAG: --cloud-config="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355500 5099 flags.go:64] FLAG: --cloud-provider="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355504 5099 flags.go:64] FLAG: --cluster-dns="[]" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355513 5099 flags.go:64] FLAG: --cluster-domain="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355517 5099 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355522 5099 flags.go:64] FLAG: --config-dir="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355527 5099 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355532 5099 flags.go:64] FLAG: --container-log-max-files="5" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355541 5099 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355546 5099 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355550 5099 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355555 5099 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355560 5099 flags.go:64] FLAG: --contention-profiling="false" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355565 5099 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355569 5099 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355574 5099 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355579 5099 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355587 5099 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355591 5099 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355596 5099 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355601 5099 flags.go:64] FLAG: --enable-load-reader="false" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355605 5099 flags.go:64] FLAG: --enable-server="true" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355610 5099 flags.go:64] FLAG: 
--enforce-node-allocatable="[pods]" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355616 5099 flags.go:64] FLAG: --event-burst="100" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355621 5099 flags.go:64] FLAG: --event-qps="50" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355626 5099 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355631 5099 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355635 5099 flags.go:64] FLAG: --eviction-hard="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355641 5099 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355646 5099 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355651 5099 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355656 5099 flags.go:64] FLAG: --eviction-soft="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355660 5099 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355665 5099 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355671 5099 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355676 5099 flags.go:64] FLAG: --experimental-mounter-path="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355681 5099 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355685 5099 flags.go:64] FLAG: --fail-swap-on="true" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355691 5099 flags.go:64] FLAG: --feature-gates="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355697 5099 flags.go:64] FLAG: --file-check-frequency="20s" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355705 5099 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355710 5099 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355714 5099 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355719 5099 flags.go:64] FLAG: --healthz-port="10248" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355724 5099 flags.go:64] FLAG: --help="false" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355729 5099 flags.go:64] FLAG: --hostname-override="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355737 5099 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355742 5099 flags.go:64] FLAG: --http-check-frequency="20s" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355761 5099 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355766 5099 flags.go:64] FLAG: --image-credential-provider-config="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355772 5099 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355776 5099 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355781 5099 flags.go:64] FLAG: --image-service-endpoint="" Jan 21 
18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355786 5099 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355791 5099 flags.go:64] FLAG: --kube-api-burst="100" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355796 5099 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355803 5099 flags.go:64] FLAG: --kube-api-qps="50" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355808 5099 flags.go:64] FLAG: --kube-reserved="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355813 5099 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355817 5099 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355822 5099 flags.go:64] FLAG: --kubelet-cgroups="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355827 5099 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355831 5099 flags.go:64] FLAG: --lock-file="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355837 5099 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355842 5099 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355847 5099 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355855 5099 flags.go:64] FLAG: --log-json-split-stream="false" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355860 5099 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355864 5099 flags.go:64] FLAG: --log-text-split-stream="false" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355869 5099 flags.go:64] FLAG: --logging-format="text" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355874 5099 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355879 5099 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355886 5099 flags.go:64] FLAG: --manifest-url="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355890 5099 flags.go:64] FLAG: --manifest-url-header="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355897 5099 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355902 5099 flags.go:64] FLAG: --max-open-files="1000000" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355908 5099 flags.go:64] FLAG: --max-pods="110" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355913 5099 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355918 5099 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355923 5099 flags.go:64] FLAG: --memory-manager-policy="None" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355928 5099 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355933 5099 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355938 5099 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 21 
18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355943 5099 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhel" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355956 5099 flags.go:64] FLAG: --node-status-max-images="50" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355961 5099 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355965 5099 flags.go:64] FLAG: --oom-score-adj="-999" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355970 5099 flags.go:64] FLAG: --pod-cidr="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355977 5099 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2b30e70040205c2536d01ae5c850be1ed2d775cf13249e50328e5085777977" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355986 5099 flags.go:64] FLAG: --pod-manifest-path="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355991 5099 flags.go:64] FLAG: --pod-max-pids="-1" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.355996 5099 flags.go:64] FLAG: --pods-per-core="0" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.356001 5099 flags.go:64] FLAG: --port="10250" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.356005 5099 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.356010 5099 flags.go:64] FLAG: --provider-id="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.356015 5099 flags.go:64] FLAG: --qos-reserved="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.356020 5099 flags.go:64] FLAG: --read-only-port="10255" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.356024 5099 flags.go:64] FLAG: --register-node="true" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.356029 5099 flags.go:64] FLAG: --register-schedulable="true" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.356034 5099 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.356042 5099 flags.go:64] FLAG: --registry-burst="10" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.356047 5099 flags.go:64] FLAG: --registry-qps="5" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.356051 5099 flags.go:64] FLAG: --reserved-cpus="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.356055 5099 flags.go:64] FLAG: --reserved-memory="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.356067 5099 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.356072 5099 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.356077 5099 flags.go:64] FLAG: --rotate-certificates="false" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.356082 5099 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.356086 5099 flags.go:64] FLAG: --runonce="false" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.356091 5099 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.356096 5099 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.356101 5099 flags.go:64] FLAG: --seccomp-default="false" Jan 21 18:14:03 crc kubenswrapper[5099]: 
I0121 18:14:03.356106 5099 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.356110 5099 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.356115 5099 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.356120 5099 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.356131 5099 flags.go:64] FLAG: --storage-driver-password="root" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.356136 5099 flags.go:64] FLAG: --storage-driver-secure="false" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.356141 5099 flags.go:64] FLAG: --storage-driver-table="stats" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.356145 5099 flags.go:64] FLAG: --storage-driver-user="root" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.356150 5099 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.356158 5099 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.356163 5099 flags.go:64] FLAG: --system-cgroups="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.356168 5099 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.356176 5099 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.356181 5099 flags.go:64] FLAG: --tls-cert-file="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.356186 5099 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.356193 5099 flags.go:64] FLAG: --tls-min-version="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.356197 5099 flags.go:64] FLAG: --tls-private-key-file="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.356202 5099 flags.go:64] FLAG: --topology-manager-policy="none" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.356207 5099 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.356212 5099 flags.go:64] FLAG: --topology-manager-scope="container" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.356216 5099 flags.go:64] FLAG: --v="2" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.356223 5099 flags.go:64] FLAG: --version="false" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.356229 5099 flags.go:64] FLAG: --vmodule="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.356235 5099 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.356242 5099 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356353 5099 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356360 5099 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356364 5099 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356371 5099 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
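The flag dump above also shows --minimum-container-ttl-duration="6m0s", the flag whose deprecation notice at the top of the log says to use --eviction-hard or --eviction-soft instead. Both eviction flags print empty in the dump, so the node's real thresholds are not visible here; if the advice were followed in the config file, the replacement would look roughly like this, with placeholder thresholds:

```yaml
# Hypothetical eviction stanza replacing --minimum-container-ttl-duration.
# Thresholds are placeholders; only evictionPressureTransitionPeriod matches the dump above.
evictionHard:
  nodefs.available: "10%"
  imagefs.available: "15%"
evictionSoft:
  memory.available: "500Mi"
evictionSoftGracePeriod:
  memory.available: "1m30s"
evictionPressureTransitionPeriod: 5m0s
```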
Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356376 5099 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356381 5099 feature_gate.go:328] unrecognized feature gate: DualReplica Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356385 5099 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356390 5099 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356395 5099 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356399 5099 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356404 5099 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356408 5099 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356414 5099 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356418 5099 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356423 5099 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356427 5099 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356434 5099 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356439 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356444 5099 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356448 5099 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356453 5099 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356457 5099 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356462 5099 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356467 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356471 5099 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356475 5099 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356479 5099 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356483 5099 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356487 5099 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Jan 21 18:14:03 crc 
kubenswrapper[5099]: W0121 18:14:03.356492 5099 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356496 5099 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356500 5099 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356504 5099 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356508 5099 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356513 5099 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356517 5099 feature_gate.go:328] unrecognized feature gate: Example2 Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356521 5099 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356525 5099 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356529 5099 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356533 5099 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356537 5099 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356541 5099 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356545 5099 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356549 5099 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356552 5099 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356557 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356562 5099 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356566 5099 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356572 5099 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356576 5099 feature_gate.go:328] unrecognized feature gate: PinnedImages Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356580 5099 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356585 5099 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356590 5099 feature_gate.go:328] unrecognized feature gate: InsightsConfig Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356594 5099 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356600 5099 feature_gate.go:349] Setting deprecated feature 
gate KMSv1=true. It will be removed in a future release. Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356605 5099 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356610 5099 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356614 5099 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356618 5099 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356622 5099 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356626 5099 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356629 5099 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356633 5099 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356637 5099 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356642 5099 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356646 5099 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356650 5099 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356654 5099 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356658 5099 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356663 5099 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356667 5099 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356671 5099 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356675 5099 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356679 5099 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356683 5099 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356687 5099 feature_gate.go:328] unrecognized feature gate: Example Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356691 5099 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356696 5099 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356701 5099 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356705 5099 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Jan 21 
18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356711 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356715 5099 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356719 5099 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356724 5099 feature_gate.go:328] unrecognized feature gate: OVNObservability Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356728 5099 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.356736 5099 feature_gate.go:328] unrecognized feature gate: SignatureStores Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.356931 5099 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.401432 5099 server.go:530] "Kubelet version" kubeletVersion="v1.33.5" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.401475 5099 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401543 5099 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401552 5099 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401557 5099 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401562 5099 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401567 5099 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401571 5099 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401575 5099 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401579 5099 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401583 5099 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401588 5099 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401593 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401597 5099 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401602 5099 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401606 5099 feature_gate.go:328] 
unrecognized feature gate: NutanixMultiSubnets Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401610 5099 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401614 5099 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401618 5099 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401623 5099 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401627 5099 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401631 5099 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401635 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401639 5099 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401643 5099 feature_gate.go:328] unrecognized feature gate: Example2 Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401648 5099 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401652 5099 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401656 5099 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401660 5099 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401664 5099 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401668 5099 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401673 5099 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401677 5099 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401683 5099 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401687 5099 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401693 5099 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401697 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401702 5099 feature_gate.go:328] unrecognized feature gate: OVNObservability Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401706 5099 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401711 5099 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401715 5099 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 
18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401720 5099 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401725 5099 feature_gate.go:328] unrecognized feature gate: InsightsConfig Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401730 5099 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401738 5099 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401742 5099 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401768 5099 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401773 5099 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401777 5099 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401781 5099 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401785 5099 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401790 5099 feature_gate.go:328] unrecognized feature gate: Example Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401794 5099 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401798 5099 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401802 5099 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401806 5099 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401810 5099 feature_gate.go:328] unrecognized feature gate: PinnedImages Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401814 5099 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401818 5099 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401822 5099 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401825 5099 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401829 5099 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401833 5099 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401837 5099 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401841 5099 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401845 5099 feature_gate.go:328] unrecognized feature gate: SignatureStores Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401850 5099 feature_gate.go:328] unrecognized feature gate: 
ImageModeStatusReporting Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401854 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401859 5099 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401863 5099 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401867 5099 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401873 5099 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401881 5099 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401887 5099 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401892 5099 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401897 5099 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401901 5099 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401906 5099 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401910 5099 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401914 5099 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401918 5099 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401923 5099 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401927 5099 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401932 5099 feature_gate.go:328] unrecognized feature gate: DualReplica Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401936 5099 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401941 5099 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401945 5099 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.401949 5099 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.401956 5099 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false 
UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402106 5099 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402115 5099 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402119 5099 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402124 5099 feature_gate.go:328] unrecognized feature gate: SignatureStores Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402128 5099 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402133 5099 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402136 5099 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402140 5099 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402144 5099 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402148 5099 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402153 5099 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402158 5099 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402163 5099 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402167 5099 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402171 5099 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402175 5099 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402179 5099 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402183 5099 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402188 5099 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
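
The block above shows the same pattern repeatedly: every gate name the kubelet's registry does not recognize (all of them OpenShift-specific, such as GatewayAPI or ManagedBootImages) is logged at feature_gate.go:328 and skipped, while known upstream gates are merged into the effective map dumped at feature_gate.go:384. A minimal sketch of that merge follows, using invented names (applyGates, knownDefaults) rather than the real k8s.io/component-base/featuregate API:

```go
// Minimal sketch of the gate-merging behavior visible in the log above:
// known gates take their requested values; unknown names only produce a
// warning and are dropped. Illustration only -- not the actual
// k8s.io/component-base/featuregate implementation.
package main

import "fmt"

// applyGates overlays requested gate values on the known defaults,
// warning on unknown names the way feature_gate.go:328 does.
func applyGates(knownDefaults, requested map[string]bool) map[string]bool {
	effective := make(map[string]bool, len(knownDefaults))
	for name, def := range knownDefaults {
		effective[name] = def
	}
	for name, val := range requested {
		if _, known := knownDefaults[name]; !known {
			fmt.Printf("W unrecognized feature gate: %s\n", name)
			continue
		}
		effective[name] = val
	}
	return effective
}

func main() {
	defaults := map[string]bool{"KMSv1": false, "ImageVolume": false}
	requested := map[string]bool{"KMSv1": true, "ImageVolume": true, "GatewayAPI": true}
	// GatewayAPI is dropped with a warning; the rest form the effective map,
	// as in the feature_gate.go:384 dump.
	fmt.Println(applyGates(defaults, requested))
}
```

On the evidence of this log the warnings are cosmetic: the effective map printed at feature_gate.go:384 contains only upstream Kubernetes gates, with the requested KMSv1=true and ServiceAccountTokenNodeBinding=true applied despite their deprecated/GA warnings.
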
Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402195 5099 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402199 5099 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402204 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402208 5099 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402212 5099 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402216 5099 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402220 5099 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402225 5099 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402229 5099 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402233 5099 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402238 5099 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402242 5099 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402246 5099 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402251 5099 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402255 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402259 5099 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402263 5099 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402267 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402271 5099 feature_gate.go:328] unrecognized feature gate: DualReplica Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402275 5099 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402278 5099 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402282 5099 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402286 5099 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402291 5099 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402297 5099 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402301 5099 
feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402305 5099 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402311 5099 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402317 5099 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402322 5099 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402326 5099 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402331 5099 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402335 5099 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402339 5099 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402344 5099 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402348 5099 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402352 5099 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402356 5099 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402360 5099 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402366 5099 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402370 5099 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402374 5099 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402378 5099 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402383 5099 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402387 5099 feature_gate.go:328] unrecognized feature gate: OVNObservability Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402391 5099 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402395 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402399 5099 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402403 5099 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402407 5099 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402411 5099 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Jan 21 
18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402415 5099 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402419 5099 feature_gate.go:328] unrecognized feature gate: PinnedImages Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402423 5099 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402427 5099 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402432 5099 feature_gate.go:328] unrecognized feature gate: Example Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402436 5099 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402442 5099 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402446 5099 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402452 5099 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402456 5099 feature_gate.go:328] unrecognized feature gate: InsightsConfig Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402460 5099 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402465 5099 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402469 5099 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402473 5099 feature_gate.go:328] unrecognized feature gate: Example2 Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402478 5099 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Jan 21 18:14:03 crc kubenswrapper[5099]: W0121 18:14:03.402482 5099 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.402489 5099 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.402927 5099 server.go:962] "Client rotation is on, will bootstrap in background" Jan 21 18:14:03 crc kubenswrapper[5099]: E0121 18:14:03.406497 5099 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2025-12-03 08:27:53 +0000 UTC" logger="UnhandledError" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.409321 5099 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.409418 5099 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 21 
18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.409913 5099 server.go:1019] "Starting client certificate rotation" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.410099 5099 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.410198 5099 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.427270 5099 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 21 18:14:03 crc kubenswrapper[5099]: E0121 18:14:03.427968 5099 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.61:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.431962 5099 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.476174 5099 log.go:25] "Validated CRI v1 runtime API" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.537901 5099 log.go:25] "Validated CRI v1 image API" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.539670 5099 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.546091 5099 fs.go:135] Filesystem UUIDs: map[19e76f87-96b8-4794-9744-0b33dca22d5b:/dev/vda3 2026-01-21-18-07-53-00:/dev/sr0 5eb7c122-420e-4494-80ec-41664070d7b6:/dev/vda4 7B77-95E7:/dev/vda2] Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.546122 5099 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:44 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}] Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.561324 5099 manager.go:217] Machine: {Timestamp:2026-01-21 18:14:03.55846819 +0000 UTC m=+0.972430681 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33649930240 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:80bc4fba336e4ca1bc9d28a8be52a356 SystemUUID:75e1b546-23f6-45fa-956a-1002c3d2f9b5 BootID:78a4762e-39b8-4942-bf29-6a84c0f689b6 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16824963072 Type:vfs Inodes:4107657 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3364990976 Type:vfs Inodes:821531 HasInodes:true} {Device:/run DeviceMajor:0 
DeviceMinor:24 Capacity:6729986048 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6545408 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16824967168 Type:vfs Inodes:1048576 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:44 Capacity:1073741824 Type:vfs Inodes:4107657 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:07:8f:f9 Speed:0 Mtu:1500} {Name:br-int MacAddress:b2:a9:9f:57:07:84 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:07:8f:f9 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:00:77:a3 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:8c:f4:c7 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:bb:62:ee Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:5e:18:a1 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:86:1c:81:95:da:2d Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:d6:e3:3f:07:c5:21 Speed:0 Mtu:1500} {Name:tap0 MacAddress:5a:94:ef:e4:0c:ee Speed:10 Mtu:1500}] Topology:[{Id:0 Memory:33649930240 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 
Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.561550 5099 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.561710 5099 manager.go:233] Version: {KernelVersion:5.14.0-570.57.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20251021-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.562540 5099 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.562583 5099 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.562806 5099 topology_manager.go:138] "Creating topology manager with none policy" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.562818 5099 container_manager_linux.go:306] "Creating device plugin manager" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.562843 5099 manager.go:141] "Creating Device Plugin manager" 
path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.564609 5099 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.565063 5099 state_mem.go:36] "Initialized new in-memory state store" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.565257 5099 server.go:1267] "Using root directory" path="/var/lib/kubelet" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.565869 5099 kubelet.go:491] "Attempting to sync node with API server" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.565899 5099 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.565916 5099 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.565931 5099 kubelet.go:397] "Adding apiserver pod source" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.565948 5099 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.567494 5099 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.567515 5099 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.570393 5099 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint" Jan 21 18:14:03 crc kubenswrapper[5099]: E0121 18:14:03.570401 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.61:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.570423 5099 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Jan 21 18:14:03 crc kubenswrapper[5099]: E0121 18:14:03.570556 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.61:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.572102 5099 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.5-3.rhaos4.20.gitd0ea985.el9" apiVersion="v1" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.572783 5099 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-server-current.pem" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.573664 5099 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.574037 5099 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.574068 5099 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 21 18:14:03 crc 
kubenswrapper[5099]: I0121 18:14:03.574078 5099 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.574086 5099 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.574094 5099 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.574103 5099 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.574111 5099 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.574119 5099 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.574129 5099 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.574145 5099 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.574163 5099 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.574324 5099 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.575534 5099 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.575556 5099 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.577498 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.61:6443: connect: connection refused Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.585366 5099 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.585428 5099 server.go:1295] "Started kubelet" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.585589 5099 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.585687 5099 server_v1.go:47] "podresources" method="list" useActivePods=true Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.585677 5099 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 21 18:14:03 crc systemd[1]: Started Kubernetes Kubelet. 
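
The ratelimit.go:55 entry above configures the podresources endpoint with qps=100 and burstTokens=10. Those two numbers describe a standard token bucket; here is a sketch using golang.org/x/time/rate with the same parameters (whether the kubelet wires its podresources gRPC server through exactly this type is an assumption):

```go
// Hedged sketch of the logged rate limit: qps=100 sustained, bursts of 10.
// Requires: go get golang.org/x/time
package main

import (
	"fmt"

	"golang.org/x/time/rate"
)

func main() {
	// 100 tokens/second refill, bucket capacity 10, matching the log line.
	limiter := rate.NewLimiter(rate.Limit(100), 10)

	served, throttled := 0, 0
	for i := 0; i < 1000; i++ {
		if limiter.Allow() { // non-blocking: reject instead of queueing
			served++
		} else {
			throttled++
		}
	}
	// In a tight loop only the ~10 burst tokens succeed immediately;
	// the rest are throttled until the bucket refills.
	fmt.Printf("served=%d throttled=%d\n", served, throttled)
}
```

The burst of 10 lets short spikes through unthrottled, while the 100/s refill rate caps sustained load on the kubelet.sock podresources listener that the next entries start serving.
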
Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.586894 5099 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.587117 5099 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.587167 5099 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.588026 5099 volume_manager.go:295] "The desired_state_of_world populator starts" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.588057 5099 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.588136 5099 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 21 18:14:03 crc kubenswrapper[5099]: E0121 18:14:03.588201 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.589137 5099 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.589156 5099 factory.go:55] Registering systemd factory Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.589164 5099 factory.go:223] Registration of the systemd container factory successfully Jan 21 18:14:03 crc kubenswrapper[5099]: E0121 18:14:03.589234 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.61:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 21 18:14:03 crc kubenswrapper[5099]: E0121 18:14:03.589462 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.61:6443: connect: connection refused" interval="200ms" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.589708 5099 factory.go:153] Registering CRI-O factory Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.589779 5099 factory.go:223] Registration of the crio container factory successfully Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.589818 5099 factory.go:103] Registering Raw factory Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.589845 5099 manager.go:1196] Started watching for new ooms in manager Jan 21 18:14:03 crc kubenswrapper[5099]: E0121 18:14:03.587969 5099 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.61:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188cd1a11ebcc551 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:03.585389905 +0000 UTC m=+0.999352366,LastTimestamp:2026-01-21 18:14:03.585389905 +0000 UTC 
m=+0.999352366,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.591181 5099 manager.go:319] Starting recovery of all containers Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.591522 5099 server.go:317] "Adding debug handlers to kubelet server" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.615580 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616002 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616016 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616027 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616061 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616072 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616082 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616093 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616105 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616115 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" 
volumeName="kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616147 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616158 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616176 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616188 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616226 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616303 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616318 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616330 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616341 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616353 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616408 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" 
volumeName="kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616418 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616427 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616438 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616448 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616479 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616490 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616500 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616518 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616528 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616542 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616553 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" 
volumeName="kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616586 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616597 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616608 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616619 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616630 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616640 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616654 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616687 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616701 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616741 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616754 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" 
volumeName="kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616766 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616779 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616790 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0effdbcf-dd7d-404d-9d48-77536d665a5d" volumeName="kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616801 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616813 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616826 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616836 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616847 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616857 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616868 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616878 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" 
volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616888 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616897 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616958 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616972 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616982 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.616994 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.617005 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.617015 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.617026 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.617035 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.617045 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" 
volumeName="kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.617057 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.617069 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.617080 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.617091 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.617102 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.617112 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.617121 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.617131 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.617140 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.617152 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.617161 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" 
volumeName="kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.617174 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.617186 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.617196 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.617207 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.617217 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.617227 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af41de71-79cf-4590-bbe9-9e8b848862cb" volumeName="kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.617238 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.617249 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b638b8f4bb0070e40528db779baf6a2" volumeName="kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.617260 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.617270 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.617281 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" 
volumeName="kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.617292 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.617303 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.617345 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.617359 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.617371 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.617385 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.617398 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.617409 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.617424 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.617436 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.617450 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" 
volumeName="kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.617464 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.618452 5099 manager.go:324] Recovery completed Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.618666 5099 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.618720 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.618800 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.618816 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.618829 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.618842 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.618859 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.618872 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.618885 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" seLinuxMountContext="" Jan 21 
18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.618901 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.618913 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.618925 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.618937 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.618949 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.618984 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.618998 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619011 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619024 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619035 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619047 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619061 5099 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619075 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619097 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619107 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619119 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619133 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619144 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619155 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619167 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619180 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619192 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619205 5099 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619264 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619276 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619290 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619302 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619313 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619324 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619335 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619347 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619357 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619370 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619382 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619394 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619405 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619418 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619429 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619440 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619451 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619463 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619475 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619486 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619497 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619513 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" 
volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619524 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619535 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619545 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619556 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619568 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619580 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619591 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619603 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619615 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619629 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619641 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" 
volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619652 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619663 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619674 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20c5c5b4bed930554494851fe3cb2b2a" volumeName="kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619685 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619696 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619705 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619716 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619731 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619814 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619829 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619841 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" 
volumeName="kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619852 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619864 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619875 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619910 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619923 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619935 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619948 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619959 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619972 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619984 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.619998 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" 
volumeName="kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620011 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620022 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f863fff9-286a-45fa-b8f0-8a86994b8440" volumeName="kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620036 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620050 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620063 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620079 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620090 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620102 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620116 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620130 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620143 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" 
volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620154 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620164 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620174 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620184 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620195 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620207 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620219 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620232 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620243 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620254 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620264 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" 
volumeName="kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620275 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620287 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620297 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620309 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620319 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620330 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620345 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620388 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620401 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620425 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620439 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" 
volumeName="kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620451 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620464 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620477 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620490 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620504 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620522 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620572 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620587 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620608 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620638 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620652 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" 
volumeName="kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620664 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620675 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620691 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620702 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620714 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620725 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620762 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620776 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620788 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620799 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620813 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620824 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17b87002-b798-480a-8e17-83053d698239" volumeName="kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620836 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e093be35-bb62-4843-b2e8-094545761610" volumeName="kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" seLinuxMountContext="" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620846 5099 reconstruct.go:97] "Volume reconstruction finished" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.620853 5099 reconciler.go:26] "Reconciler: start to sync state" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.632973 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.634862 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.634937 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.634951 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.635778 5099 cpu_manager.go:222] "Starting CPU manager" policy="none" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.635806 5099 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.635829 5099 state_mem.go:36] "Initialized new in-memory state store" Jan 21 18:14:03 crc kubenswrapper[5099]: E0121 18:14:03.688600 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:14:03 crc kubenswrapper[5099]: E0121 18:14:03.788765 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:14:03 crc kubenswrapper[5099]: E0121 18:14:03.792404 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.61:6443: connect: connection refused" interval="400ms" Jan 21 18:14:03 crc kubenswrapper[5099]: E0121 18:14:03.888976 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.910614 5099 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.912377 5099 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.912420 5099 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.912449 5099 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.912458 5099 kubelet.go:2451] "Starting kubelet main sync loop" Jan 21 18:14:03 crc kubenswrapper[5099]: E0121 18:14:03.912549 5099 kubelet.go:2475] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 21 18:14:03 crc kubenswrapper[5099]: E0121 18:14:03.914620 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.61:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.927481 5099 policy_none.go:49] "None policy: Start" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.927545 5099 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.927566 5099 state_mem.go:35] "Initializing new in-memory state store" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.987966 5099 manager.go:341] "Starting Device Plugin manager" Jan 21 18:14:03 crc kubenswrapper[5099]: E0121 18:14:03.988193 5099 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.988212 5099 server.go:85] "Starting device plugin registration server" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.988593 5099 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.988612 5099 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.988854 5099 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.988926 5099 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 21 18:14:03 crc kubenswrapper[5099]: I0121 18:14:03.988933 5099 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 21 18:14:03 crc kubenswrapper[5099]: E0121 18:14:03.992669 5099 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="non-existent label \"crio-containers\"" Jan 21 18:14:03 crc kubenswrapper[5099]: E0121 18:14:03.992724 5099 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.012911 5099 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.013378 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.014205 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.014256 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.014290 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.015268 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.015316 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.015785 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.015959 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.016002 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.016014 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.016217 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.016240 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.016255 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.017634 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.017689 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.018280 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.022151 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.022181 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.022152 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.022193 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.022209 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.022227 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.023046 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.023352 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.023385 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.024155 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.024173 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.024184 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.024159 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.024276 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.024295 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.025335 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.025344 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.025596 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.025821 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.025850 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.025863 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.025981 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.026008 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.026020 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.026520 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.026557 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.026985 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.027018 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.027033 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:14:04 crc kubenswrapper[5099]: E0121 18:14:04.047866 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 18:14:04 crc kubenswrapper[5099]: E0121 18:14:04.054876 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 18:14:04 crc kubenswrapper[5099]: E0121 18:14:04.070844 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.088879 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.089455 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.089505 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.089515 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.089537 5099 
kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 21 18:14:04 crc kubenswrapper[5099]: E0121 18:14:04.089920 5099 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.61:6443: connect: connection refused" node="crc" Jan 21 18:14:04 crc kubenswrapper[5099]: E0121 18:14:04.090666 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 18:14:04 crc kubenswrapper[5099]: E0121 18:14:04.096254 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.126708 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.126815 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.126942 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.126993 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.127021 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.127199 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.127234 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 18:14:04 crc 
kubenswrapper[5099]: I0121 18:14:04.127273 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.127317 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.127358 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.127436 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.127567 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.127625 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.127677 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.127718 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.127783 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.127809 5099 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.127897 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.127980 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.128055 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.128109 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.128180 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.128213 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.128235 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.128280 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.128366 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: 
\"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.128423 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.128601 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.128621 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.129285 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: E0121 18:14:04.193700 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.61:6443: connect: connection refused" interval="800ms" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.229433 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.229617 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.229642 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.229662 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.229677 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.229720 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.229757 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.229771 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.229783 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.229797 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.229803 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.229852 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.229521 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.229810 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.229925 5099 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.230010 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.230066 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.230086 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.230048 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.230107 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.230113 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.230135 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.230164 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.230173 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.230189 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.230215 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.230235 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.230179 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.230394 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.230620 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.230637 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.230650 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.290272 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.291492 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.291547 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.291556 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 
18:14:04.291580 5099 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 21 18:14:04 crc kubenswrapper[5099]: E0121 18:14:04.292127 5099 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.61:6443: connect: connection refused" node="crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.349081 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.356109 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.372041 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: W0121 18:14:04.383550 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c5c5b4bed930554494851fe3cb2b2a.slice/crio-6220951c2d1cafbfee7dd3311ce76feeebbdaa596c0143d2aafbc6722bcdd51f WatchSource:0}: Error finding container 6220951c2d1cafbfee7dd3311ce76feeebbdaa596c0143d2aafbc6722bcdd51f: Status 404 returned error can't find the container with id 6220951c2d1cafbfee7dd3311ce76feeebbdaa596c0143d2aafbc6722bcdd51f Jan 21 18:14:04 crc kubenswrapper[5099]: W0121 18:14:04.385440 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e08c320b1e9e2405e6e0107bdf7eeb4.slice/crio-354081ee1679751f5051ecbd7dbba229a58a023213b89cb73eea4e39904e5fa6 WatchSource:0}: Error finding container 354081ee1679751f5051ecbd7dbba229a58a023213b89cb73eea4e39904e5fa6: Status 404 returned error can't find the container with id 354081ee1679751f5051ecbd7dbba229a58a023213b89cb73eea4e39904e5fa6 Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.391188 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.391224 5099 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.396814 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 18:14:04 crc kubenswrapper[5099]: W0121 18:14:04.400113 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a14caf222afb62aaabdc47808b6f944.slice/crio-eb913fec870bab6e9193b9b5c668876738a0332b7e0d05fdd018861f7424f132 WatchSource:0}: Error finding container eb913fec870bab6e9193b9b5c668876738a0332b7e0d05fdd018861f7424f132: Status 404 returned error can't find the container with id eb913fec870bab6e9193b9b5c668876738a0332b7e0d05fdd018861f7424f132 Jan 21 18:14:04 crc kubenswrapper[5099]: W0121 18:14:04.410160 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f0bc7fcb0822a2c13eb2d22cd8c0641.slice/crio-652af9d6729429184e12d9406ac0a00ac3123c8955742287b0eadd5bc542ec61 WatchSource:0}: Error finding container 652af9d6729429184e12d9406ac0a00ac3123c8955742287b0eadd5bc542ec61: Status 404 returned error can't find the container with id 652af9d6729429184e12d9406ac0a00ac3123c8955742287b0eadd5bc542ec61 Jan 21 18:14:04 crc kubenswrapper[5099]: W0121 18:14:04.413618 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b638b8f4bb0070e40528db779baf6a2.slice/crio-1f77132d01210ac4220c2805a3235d1ba5a7369fc665ee9a706870eb512e70bb WatchSource:0}: Error finding container 1f77132d01210ac4220c2805a3235d1ba5a7369fc665ee9a706870eb512e70bb: Status 404 returned error can't find the container with id 1f77132d01210ac4220c2805a3235d1ba5a7369fc665ee9a706870eb512e70bb Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.578831 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.61:6443: connect: connection refused Jan 21 18:14:04 crc kubenswrapper[5099]: E0121 18:14:04.589482 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.61:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 21 18:14:04 crc kubenswrapper[5099]: E0121 18:14:04.627787 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.61:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.693068 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.694413 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.694453 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.694462 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.694489 5099 
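Each reflector.go:200 failure names k8s.io/client-go/informers/factory.go:160: the kubelet's shared informers (here Node, CSIDriver, RuntimeClass, Service) cannot complete their initial LIST against the unreachable apiserver, so every reflector retries independently with backoff. A minimal sketch of that informer pattern, with the kubeconfig path and resync period as assumptions:

    // informers.go: the client-go construction behind the "Failed to watch" lines.
    package main

    import (
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumption: a kubeconfig at this path; any reachable kubeconfig works.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        factory := informers.NewSharedInformerFactory(cs, 10*time.Minute) // resync assumed
        nodeInformer := factory.Core().V1().Nodes().Informer()
        nodeInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
            AddFunc: func(obj interface{}) {
                fmt.Println("node added:", obj.(*corev1.Node).Name)
            },
        })
        stop := make(chan struct{})
        defer close(stop)
        factory.Start(stop)
        // While the endpoint refuses connections, the reflector's initial list
        // fails and this sync never completes, matching the repeated errors above.
        factory.WaitForCacheSync(stop)
    }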
Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.693068 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.694413 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.694453 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.694462 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.694489 5099 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 21 18:14:04 crc kubenswrapper[5099]: E0121 18:14:04.695038 5099 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.61:6443: connect: connection refused" node="crc"
Jan 21 18:14:04 crc kubenswrapper[5099]: E0121 18:14:04.750445 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.61:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.916833 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"1f77132d01210ac4220c2805a3235d1ba5a7369fc665ee9a706870eb512e70bb"}
Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.917713 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"652af9d6729429184e12d9406ac0a00ac3123c8955742287b0eadd5bc542ec61"}
Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.918892 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"eb913fec870bab6e9193b9b5c668876738a0332b7e0d05fdd018861f7424f132"}
Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.919798 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"6220951c2d1cafbfee7dd3311ce76feeebbdaa596c0143d2aafbc6722bcdd51f"}
Jan 21 18:14:04 crc kubenswrapper[5099]: I0121 18:14:04.920578 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"354081ee1679751f5051ecbd7dbba229a58a023213b89cb73eea4e39904e5fa6"}
Jan 21 18:14:05 crc kubenswrapper[5099]: E0121 18:14:04.995568 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.61:6443: connect: connection refused" interval="1.6s"
Jan 21 18:14:05 crc kubenswrapper[5099]: E0121 18:14:05.007409 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.61:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 21 18:14:05 crc kubenswrapper[5099]: I0121 18:14:05.495788 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 18:14:05 crc kubenswrapper[5099]: I0121 18:14:05.497204 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 18:14:05 crc kubenswrapper[5099]: I0121 18:14:05.497255 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 18:14:05 crc kubenswrapper[5099]: I0121 18:14:05.497275 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 18:14:05 crc kubenswrapper[5099]: I0121 18:14:05.497309 5099 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 21 18:14:05 crc kubenswrapper[5099]: E0121 18:14:05.497852 5099 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.61:6443: connect: connection refused" node="crc"
Jan 21 18:14:05 crc kubenswrapper[5099]: I0121 18:14:05.520952 5099 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Jan 21 18:14:05 crc kubenswrapper[5099]: E0121 18:14:05.521976 5099 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.61:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
containerID="3d3315339d0385d2d2682ac27476be8f619f5936da9319a84a35621133d98243" exitCode=0 Jan 21 18:14:05 crc kubenswrapper[5099]: I0121 18:14:05.929941 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"3d3315339d0385d2d2682ac27476be8f619f5936da9319a84a35621133d98243"} Jan 21 18:14:05 crc kubenswrapper[5099]: I0121 18:14:05.930086 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 18:14:05 crc kubenswrapper[5099]: I0121 18:14:05.930823 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:14:05 crc kubenswrapper[5099]: I0121 18:14:05.930849 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:14:05 crc kubenswrapper[5099]: I0121 18:14:05.930861 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:14:05 crc kubenswrapper[5099]: E0121 18:14:05.931007 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 18:14:05 crc kubenswrapper[5099]: I0121 18:14:05.931821 5099 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="0b2410052c2fa6360fd3b85be6092bfad8e857fbf2ca993187238c13769f8832" exitCode=0 Jan 21 18:14:05 crc kubenswrapper[5099]: I0121 18:14:05.931927 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 18:14:05 crc kubenswrapper[5099]: I0121 18:14:05.931973 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"0b2410052c2fa6360fd3b85be6092bfad8e857fbf2ca993187238c13769f8832"} Jan 21 18:14:05 crc kubenswrapper[5099]: I0121 18:14:05.932791 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 18:14:05 crc kubenswrapper[5099]: I0121 18:14:05.946195 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:14:05 crc kubenswrapper[5099]: I0121 18:14:05.946252 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:14:05 crc kubenswrapper[5099]: I0121 18:14:05.946265 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:14:05 crc kubenswrapper[5099]: I0121 18:14:05.946195 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:14:05 crc kubenswrapper[5099]: I0121 18:14:05.946425 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:14:05 crc kubenswrapper[5099]: I0121 18:14:05.946454 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:14:05 crc kubenswrapper[5099]: E0121 18:14:05.946860 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 18:14:06 crc kubenswrapper[5099]: I0121 18:14:06.578973 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode 
publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.61:6443: connect: connection refused Jan 21 18:14:06 crc kubenswrapper[5099]: E0121 18:14:06.597064 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.61:6443: connect: connection refused" interval="3.2s" Jan 21 18:14:06 crc kubenswrapper[5099]: E0121 18:14:06.875941 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.61:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 21 18:14:07 crc kubenswrapper[5099]: E0121 18:14:07.066179 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.61:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 21 18:14:07 crc kubenswrapper[5099]: I0121 18:14:07.098321 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 18:14:07 crc kubenswrapper[5099]: I0121 18:14:07.099230 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:14:07 crc kubenswrapper[5099]: I0121 18:14:07.099264 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:14:07 crc kubenswrapper[5099]: I0121 18:14:07.099280 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:14:07 crc kubenswrapper[5099]: I0121 18:14:07.099306 5099 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 21 18:14:07 crc kubenswrapper[5099]: E0121 18:14:07.099791 5099 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.61:6443: connect: connection refused" node="crc" Jan 21 18:14:07 crc kubenswrapper[5099]: I0121 18:14:07.152259 5099 generic.go:358] "Generic (PLEG): container finished" podID="4e08c320b1e9e2405e6e0107bdf7eeb4" containerID="9c2f414962af81ed6e55d81dd5f9dd9dd326f051f9f43d3547232d163c9c8dfe" exitCode=0 Jan 21 18:14:07 crc kubenswrapper[5099]: I0121 18:14:07.152320 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerDied","Data":"9c2f414962af81ed6e55d81dd5f9dd9dd326f051f9f43d3547232d163c9c8dfe"} Jan 21 18:14:07 crc kubenswrapper[5099]: I0121 18:14:07.152435 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 18:14:07 crc kubenswrapper[5099]: I0121 18:14:07.153247 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:14:07 crc kubenswrapper[5099]: I0121 18:14:07.153347 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:14:07 crc kubenswrapper[5099]: I0121 18:14:07.153369 5099 
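
The lease controller's retry cadence can be read straight off these records: interval="1.6s" at 18:14:04, "3.2s" at 18:14:06, then "6.4s" and finally a ceiling of "7s" further down. That is exponential doubling with a cap; a sketch that reproduces just this cadence (constants read off the log, not taken from kubelet source):

```go
package main

import (
	"fmt"
	"time"
)

// Reproduce the retry intervals visible in the "Failed to ensure lease
// exists, will retry" records: 1.6s -> 3.2s -> 6.4s -> 7s (capped).
func main() {
	delay := 1600 * time.Millisecond
	const maxDelay = 7 * time.Second
	for attempt := 1; attempt <= 5; attempt++ {
		fmt.Printf("attempt %d failed, retrying in %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay // further retries stay at the ceiling
		}
	}
}
```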
Jan 21 18:14:07 crc kubenswrapper[5099]: I0121 18:14:07.153369 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 18:14:07 crc kubenswrapper[5099]: E0121 18:14:07.153854 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 18:14:07 crc kubenswrapper[5099]: E0121 18:14:07.425031 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.61:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 21 18:14:07 crc kubenswrapper[5099]: I0121 18:14:07.581029 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.61:6443: connect: connection refused
Jan 21 18:14:07 crc kubenswrapper[5099]: E0121 18:14:07.920506 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.61:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 21 18:14:08 crc kubenswrapper[5099]: I0121 18:14:08.170856 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"69214b9af392a2ff7edbb9c7275209780398e51d77d93f6fc12bdb6df10c0018"}
Jan 21 18:14:08 crc kubenswrapper[5099]: I0121 18:14:08.173946 5099 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="9604f7270770e683bf5d664f618d34ca92249229df8a831af789d013e699f878" exitCode=0
Jan 21 18:14:08 crc kubenswrapper[5099]: I0121 18:14:08.174023 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"9604f7270770e683bf5d664f618d34ca92249229df8a831af789d013e699f878"}
Jan 21 18:14:08 crc kubenswrapper[5099]: I0121 18:14:08.174229 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 18:14:08 crc kubenswrapper[5099]: I0121 18:14:08.175041 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 18:14:08 crc kubenswrapper[5099]: I0121 18:14:08.175070 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 18:14:08 crc kubenswrapper[5099]: I0121 18:14:08.175081 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 18:14:08 crc kubenswrapper[5099]: E0121 18:14:08.175279 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 18:14:08 crc kubenswrapper[5099]: I0121 18:14:08.178926 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"ee286e0146a85cafd23651aabbbe69ebe16248b425e092b613ba569a236b6e20"}
Jan 21 18:14:08 crc kubenswrapper[5099]: I0121 18:14:08.179013 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 18:14:08 crc kubenswrapper[5099]: I0121 18:14:08.179539 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 18:14:08 crc kubenswrapper[5099]: I0121 18:14:08.179562 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 18:14:08 crc kubenswrapper[5099]: I0121 18:14:08.179572 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 18:14:08 crc kubenswrapper[5099]: E0121 18:14:08.179714 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 18:14:08 crc kubenswrapper[5099]: I0121 18:14:08.198895 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"d7509aab4bd3b4f6d8703e94734d66e77bba951303378eb60ba01943532bfb41"}
Jan 21 18:14:08 crc kubenswrapper[5099]: I0121 18:14:08.198974 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"4b9669ba5715cd91dacfc8e6be29f5830419da2d302adcb6b5fa29ef07eac6c6"}
Jan 21 18:14:08 crc kubenswrapper[5099]: I0121 18:14:08.202537 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"52c12ba2dd207284ad9505418797edb5216cc4e70217b3f68d2e9ca82396e7f3"}
Jan 21 18:14:08 crc kubenswrapper[5099]: I0121 18:14:08.202603 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"95eb561748a9a0787cdcdcbb483eb6c1e2c1949db936d7d25fbfe7f9cfc5db88"}
Jan 21 18:14:08 crc kubenswrapper[5099]: I0121 18:14:08.578439 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.61:6443: connect: connection refused
Jan 21 18:14:09 crc kubenswrapper[5099]: I0121 18:14:09.207233 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"77fff9816418e4672b53ddb5f6696e449760e9cf1da327c01939f0df0bb333f7"}
Jan 21 18:14:09 crc kubenswrapper[5099]: I0121 18:14:09.207302 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 18:14:09 crc kubenswrapper[5099]: I0121 18:14:09.207846 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 18:14:09 crc kubenswrapper[5099]: I0121 18:14:09.207889 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 18:14:09 crc kubenswrapper[5099]: I0121 18:14:09.207899 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
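
The paired `generic.go:358` / "SyncLoop (PLEG)" records are the pod lifecycle event generator at work: a periodic relist through the container runtime detects state changes and hands the sync loop one event per change. The `exitCode=0` "container finished" records here are the static pods' init containers running to completion in sequence. Illustrative shapes only (these are not the kubelet's actual types):

```go
package main

import "fmt"

// A PLEG-style lifecycle event, reduced to the fields visible in the log:
// pod UID, event type, and the container ID carried as data.
type podLifecycleEvent struct {
	PodID string
	Type  string // "ContainerStarted" or "ContainerDied"
	Data  string // container ID
}

// The sync loop dispatches on the event type, as the "SyncLoop (PLEG):
// event for pod" records above show.
func handle(ev podLifecycleEvent) {
	switch ev.Type {
	case "ContainerStarted":
		fmt.Printf("pod %s: container %s started\n", ev.PodID, ev.Data)
	case "ContainerDied":
		// For init containers, exiting with code 0 is the normal path:
		// each must finish before the next container starts.
		fmt.Printf("pod %s: container %s exited\n", ev.PodID, ev.Data)
	}
}

func main() {
	handle(podLifecycleEvent{PodID: "0b638b8f...", Type: "ContainerDied", Data: "950a4377..."})
}
```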
Jan 21 18:14:09 crc kubenswrapper[5099]: E0121 18:14:09.208198 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 18:14:09 crc kubenswrapper[5099]: I0121 18:14:09.734539 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.61:6443: connect: connection refused
Jan 21 18:14:09 crc kubenswrapper[5099]: I0121 18:14:09.764199 5099 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Jan 21 18:14:09 crc kubenswrapper[5099]: E0121 18:14:09.766234 5099 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.61:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jan 21 18:14:09 crc kubenswrapper[5099]: E0121 18:14:09.798116 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.61:6443: connect: connection refused" interval="6.4s"
Jan 21 18:14:10 crc kubenswrapper[5099]: E0121 18:14:10.055353 5099 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.61:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188cd1a11ebcc551 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:03.585389905 +0000 UTC m=+0.999352366,LastTimestamp:2026-01-21 18:14:03.585389905 +0000 UTC m=+0.999352366,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 18:14:10 crc kubenswrapper[5099]: I0121 18:14:10.211578 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"ea5e150318dfc21c5ddf7304a2aa589a3a74b69339533e126143c429353ee516"}
Jan 21 18:14:10 crc kubenswrapper[5099]: I0121 18:14:10.211658 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 18:14:10 crc kubenswrapper[5099]: I0121 18:14:10.212289 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 18:14:10 crc kubenswrapper[5099]: I0121 18:14:10.212311 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 18:14:10 crc kubenswrapper[5099]: I0121 18:14:10.212321 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 18:14:10 crc kubenswrapper[5099]: E0121 18:14:10.212503 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 18:14:10 crc kubenswrapper[5099]: I0121 18:14:10.214398 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"b52ee1ab5f30ea08acd0bcb057da0e09c87c2f8116e08d2d2d7bd32030c95907"}
Jan 21 18:14:10 crc kubenswrapper[5099]: I0121 18:14:10.216153 5099 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="6fa7c8ea86015110480c177f1d09a0bacd7ac31de42f05a4e2a6691ef15f510a" exitCode=0
Jan 21 18:14:10 crc kubenswrapper[5099]: I0121 18:14:10.216179 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"6fa7c8ea86015110480c177f1d09a0bacd7ac31de42f05a4e2a6691ef15f510a"}
Jan 21 18:14:10 crc kubenswrapper[5099]: I0121 18:14:10.216349 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 18:14:10 crc kubenswrapper[5099]: I0121 18:14:10.217459 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 18:14:10 crc kubenswrapper[5099]: I0121 18:14:10.217500 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 18:14:10 crc kubenswrapper[5099]: I0121 18:14:10.217517 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 18:14:10 crc kubenswrapper[5099]: E0121 18:14:10.217846 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 18:14:10 crc kubenswrapper[5099]: I0121 18:14:10.219244 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"3fb63119f6a31701b62cf7591777dad22a4f69872d5cdb087308b8b3f6ded84d"}
Jan 21 18:14:10 crc kubenswrapper[5099]: I0121 18:14:10.219410 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 18:14:10 crc kubenswrapper[5099]: I0121 18:14:10.219963 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 18:14:10 crc kubenswrapper[5099]: I0121 18:14:10.219993 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 18:14:10 crc kubenswrapper[5099]: I0121 18:14:10.220005 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 18:14:10 crc kubenswrapper[5099]: E0121 18:14:10.220200 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 18:14:10 crc kubenswrapper[5099]: I0121 18:14:10.300913 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 18:14:10 crc kubenswrapper[5099]: I0121 18:14:10.302459 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 18:14:10 crc kubenswrapper[5099]: I0121 18:14:10.302498 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 18:14:10 crc kubenswrapper[5099]: I0121 18:14:10.302509 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 18:14:10 crc kubenswrapper[5099]: I0121 18:14:10.302532 5099 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 21 18:14:10 crc kubenswrapper[5099]: E0121 18:14:10.303131 5099 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.61:6443: connect: connection refused" node="crc"
Jan 21 18:14:10 crc kubenswrapper[5099]: E0121 18:14:10.484174 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.61:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 21 18:14:10 crc kubenswrapper[5099]: I0121 18:14:10.578042 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.61:6443: connect: connection refused
Jan 21 18:14:10 crc kubenswrapper[5099]: E0121 18:14:10.716562 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.61:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 21 18:14:11 crc kubenswrapper[5099]: I0121 18:14:11.226773 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"a571febb49a23145b3d2cad3e4cc539fe35f0d8fdd7c4876c9868a1d1889ac58"}
Jan 21 18:14:11 crc kubenswrapper[5099]: I0121 18:14:11.229570 5099 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 21 18:14:11 crc kubenswrapper[5099]: I0121 18:14:11.229625 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 18:14:11 crc kubenswrapper[5099]: I0121 18:14:11.230019 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"ab3d8ac4f69b1d44b4ee29b2914f9f30a7e966194ce3efcfa2079bf66e522fe4"}
Jan 21 18:14:11 crc kubenswrapper[5099]: I0121 18:14:11.230147 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 18:14:11 crc kubenswrapper[5099]: I0121 18:14:11.230695 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 18:14:11 crc kubenswrapper[5099]: I0121 18:14:11.230726 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 18:14:11 crc kubenswrapper[5099]: I0121 18:14:11.230757 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 18:14:11 crc kubenswrapper[5099]: E0121 18:14:11.231048 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 18:14:11 crc kubenswrapper[5099]: I0121 18:14:11.231541 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 18:14:11 crc kubenswrapper[5099]: I0121 18:14:11.231564 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 18:14:11 crc kubenswrapper[5099]: I0121 18:14:11.231573 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 18:14:11 crc kubenswrapper[5099]: E0121 18:14:11.231813 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 18:14:11 crc kubenswrapper[5099]: E0121 18:14:11.496688 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.61:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 21 18:14:11 crc kubenswrapper[5099]: I0121 18:14:11.578872 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.61:6443: connect: connection refused
Jan 21 18:14:12 crc kubenswrapper[5099]: I0121 18:14:12.239889 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"d98b3ac9b8f7864e0e251f84b7166574ccc7613b3ef8a6094c6d5c24b8d6ca02"}
Jan 21 18:14:12 crc kubenswrapper[5099]: E0121 18:14:12.355213 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.61:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 21 18:14:12 crc kubenswrapper[5099]: I0121 18:14:12.578625 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.61:6443: connect: connection refused
Jan 21 18:14:12 crc kubenswrapper[5099]: I0121 18:14:12.835207 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 18:14:12 crc kubenswrapper[5099]: I0121 18:14:12.835606 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 18:14:12 crc kubenswrapper[5099]: I0121 18:14:12.837213 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 18:14:12 crc kubenswrapper[5099]: I0121 18:14:12.837273 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 18:14:12 crc kubenswrapper[5099]: I0121 18:14:12.837286 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 18:14:12 crc kubenswrapper[5099]: E0121 18:14:12.837689 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 18:14:13 crc kubenswrapper[5099]: I0121 18:14:13.074884 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
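
The failing list URLs spell out exactly what the kubelet's informers request: Services filtered with `spec.clusterIP!=None`, Nodes filtered to `metadata.name=crc`, plus CSIDriver and RuntimeClass. A client-go sketch that issues the same filtered Service list/watch (the kubeconfig path is a placeholder; the kubelet builds its client from its own bootstrap credentials instead):

```go
package main

import (
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Same selector as the log's Service list: skip headless services.
	factory := informers.NewSharedInformerFactoryWithOptions(client, 10*time.Minute,
		informers.WithTweakListOptions(func(o *metav1.ListOptions) {
			o.FieldSelector = fields.OneTermNotEqualSelector("spec.clusterIP", "None").String()
		}))

	stop := make(chan struct{})
	defer close(stop)
	factory.Core().V1().Services().Informer() // register the Service informer
	factory.Start(stop)                       // the reflector begins ListAndWatch
	factory.WaitForCacheSync(stop)
}
```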
Jan 21 18:14:13 crc kubenswrapper[5099]: I0121 18:14:13.244294 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 18:14:13 crc kubenswrapper[5099]: I0121 18:14:13.244582 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"b5effc70b8488095428d6d5459b5766bcc3d5f049f11532a3e33c309d5895ba7"}
Jan 21 18:14:13 crc kubenswrapper[5099]: I0121 18:14:13.244618 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 18:14:13 crc kubenswrapper[5099]: I0121 18:14:13.244714 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 18:14:13 crc kubenswrapper[5099]: I0121 18:14:13.245172 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 18:14:13 crc kubenswrapper[5099]: I0121 18:14:13.245194 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 18:14:13 crc kubenswrapper[5099]: I0121 18:14:13.245203 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 18:14:13 crc kubenswrapper[5099]: E0121 18:14:13.245384 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 18:14:13 crc kubenswrapper[5099]: I0121 18:14:13.245949 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 18:14:13 crc kubenswrapper[5099]: I0121 18:14:13.246702 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 18:14:13 crc kubenswrapper[5099]: I0121 18:14:13.246776 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 18:14:13 crc kubenswrapper[5099]: E0121 18:14:13.247171 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 18:14:13 crc kubenswrapper[5099]: I0121 18:14:13.579568 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.61:6443: connect: connection refused
Jan 21 18:14:13 crc kubenswrapper[5099]: I0121 18:14:13.650449 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 18:14:13 crc kubenswrapper[5099]: I0121 18:14:13.803650 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 21 18:14:13 crc kubenswrapper[5099]: I0121 18:14:13.803913 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 18:14:13 crc kubenswrapper[5099]: I0121 18:14:13.804619 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 18:14:13 crc kubenswrapper[5099]: I0121 18:14:13.804713 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 18:14:13 crc kubenswrapper[5099]: I0121 18:14:13.804763 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 18:14:13 crc kubenswrapper[5099]: I0121 18:14:13.804778 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 18:14:13 crc kubenswrapper[5099]: E0121 18:14:13.805063 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 18:14:13 crc kubenswrapper[5099]: E0121 18:14:13.992941 5099 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 21 18:14:14 crc kubenswrapper[5099]: I0121 18:14:14.201700 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 18:14:14 crc kubenswrapper[5099]: I0121 18:14:14.252404 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"820b29ee082671c0eb57e0818417c685033d77f07d6f3616eaa1d2fd22cfa628"}
Jan 21 18:14:14 crc kubenswrapper[5099]: I0121 18:14:14.252453 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"bccc1caa9affdcec7c83cf922fb2dcf8634fb3bfe34f4d0efc602ef68e8ee7b3"}
Jan 21 18:14:14 crc kubenswrapper[5099]: I0121 18:14:14.252615 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 18:14:14 crc kubenswrapper[5099]: I0121 18:14:14.252674 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 18:14:14 crc kubenswrapper[5099]: I0121 18:14:14.252985 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 18:14:14 crc kubenswrapper[5099]: I0121 18:14:14.253344 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 18:14:14 crc kubenswrapper[5099]: I0121 18:14:14.253386 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 18:14:14 crc kubenswrapper[5099]: I0121 18:14:14.253400 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 18:14:14 crc kubenswrapper[5099]: I0121 18:14:14.253567 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 18:14:14 crc kubenswrapper[5099]: I0121 18:14:14.253623 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 18:14:14 crc kubenswrapper[5099]: I0121 18:14:14.253637 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 18:14:14 crc kubenswrapper[5099]: E0121 18:14:14.253858 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 18:14:14 crc kubenswrapper[5099]: E0121 18:14:14.254160 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 18:14:14 crc kubenswrapper[5099]: I0121 18:14:14.578316 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.61:6443: connect: connection refused
Jan 21 18:14:15 crc kubenswrapper[5099]: I0121 18:14:15.257622 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 18:14:15 crc kubenswrapper[5099]: I0121 18:14:15.258067 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 18:14:15 crc kubenswrapper[5099]: I0121 18:14:15.258269 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"409970ffae80bc9259bb84be447a16b4506850a5ee1651f83b231fbb4e423cd9"}
Jan 21 18:14:15 crc kubenswrapper[5099]: I0121 18:14:15.258384 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 18:14:15 crc kubenswrapper[5099]: I0121 18:14:15.259319 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 18:14:15 crc kubenswrapper[5099]: I0121 18:14:15.259352 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 18:14:15 crc kubenswrapper[5099]: I0121 18:14:15.259363 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 18:14:15 crc kubenswrapper[5099]: E0121 18:14:15.259562 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 18:14:15 crc kubenswrapper[5099]: I0121 18:14:15.260121 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 18:14:15 crc kubenswrapper[5099]: I0121 18:14:15.260147 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 18:14:15 crc kubenswrapper[5099]: I0121 18:14:15.260157 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 18:14:15 crc kubenswrapper[5099]: E0121 18:14:15.260399 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 18:14:15 crc kubenswrapper[5099]: I0121 18:14:15.260770 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 18:14:15 crc kubenswrapper[5099]: I0121 18:14:15.260797 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 18:14:15 crc kubenswrapper[5099]: I0121 18:14:15.260809 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 18:14:15 crc kubenswrapper[5099]: E0121 18:14:15.261135 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 18:14:15 crc kubenswrapper[5099]: I0121 18:14:15.268082 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 18:14:15 crc kubenswrapper[5099]: I0121 18:14:15.578930 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.61:6443: connect: connection refused
Jan 21 18:14:16 crc kubenswrapper[5099]: E0121 18:14:16.198841 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.61:6443: connect: connection refused" interval="7s"
Jan 21 18:14:16 crc kubenswrapper[5099]: I0121 18:14:16.260178 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 18:14:16 crc kubenswrapper[5099]: I0121 18:14:16.260178 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 18:14:16 crc kubenswrapper[5099]: I0121 18:14:16.261286 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 18:14:16 crc kubenswrapper[5099]: I0121 18:14:16.261310 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 18:14:16 crc kubenswrapper[5099]: I0121 18:14:16.261335 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 18:14:16 crc kubenswrapper[5099]: I0121 18:14:16.261359 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 18:14:16 crc kubenswrapper[5099]: I0121 18:14:16.261336 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 18:14:16 crc kubenswrapper[5099]: I0121 18:14:16.261446 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 18:14:16 crc kubenswrapper[5099]: E0121 18:14:16.261791 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 18:14:16 crc kubenswrapper[5099]: E0121 18:14:16.262029 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 18:14:16 crc kubenswrapper[5099]: I0121 18:14:16.579069 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.61:6443: connect: connection refused
Jan 21 18:14:16 crc kubenswrapper[5099]: I0121 18:14:16.650587 5099 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 21 18:14:16 crc kubenswrapper[5099]: I0121 18:14:16.650706 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
controller attach/detach" Jan 21 18:14:16 crc kubenswrapper[5099]: I0121 18:14:16.704344 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:14:16 crc kubenswrapper[5099]: I0121 18:14:16.704391 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:14:16 crc kubenswrapper[5099]: I0121 18:14:16.704409 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:14:16 crc kubenswrapper[5099]: I0121 18:14:16.704444 5099 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 21 18:14:16 crc kubenswrapper[5099]: E0121 18:14:16.705014 5099 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.61:6443: connect: connection refused" node="crc" Jan 21 18:14:17 crc kubenswrapper[5099]: I0121 18:14:17.263803 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Jan 21 18:14:17 crc kubenswrapper[5099]: I0121 18:14:17.265768 5099 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="d98b3ac9b8f7864e0e251f84b7166574ccc7613b3ef8a6094c6d5c24b8d6ca02" exitCode=255 Jan 21 18:14:17 crc kubenswrapper[5099]: I0121 18:14:17.265809 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"d98b3ac9b8f7864e0e251f84b7166574ccc7613b3ef8a6094c6d5c24b8d6ca02"} Jan 21 18:14:17 crc kubenswrapper[5099]: I0121 18:14:17.266080 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 18:14:17 crc kubenswrapper[5099]: I0121 18:14:17.266984 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:14:17 crc kubenswrapper[5099]: I0121 18:14:17.267021 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:14:17 crc kubenswrapper[5099]: I0121 18:14:17.267033 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:14:17 crc kubenswrapper[5099]: E0121 18:14:17.267387 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 18:14:17 crc kubenswrapper[5099]: I0121 18:14:17.267795 5099 scope.go:117] "RemoveContainer" containerID="d98b3ac9b8f7864e0e251f84b7166574ccc7613b3ef8a6094c6d5c24b8d6ca02" Jan 21 18:14:18 crc kubenswrapper[5099]: I0121 18:14:18.004472 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 18:14:18 crc kubenswrapper[5099]: I0121 18:14:18.270414 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Jan 21 18:14:18 crc kubenswrapper[5099]: I0121 18:14:18.272593 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"0365d543f1255285ea5494d8794b264c0e22001f25e080150e4520836b258be6"} Jan 21 18:14:18 crc kubenswrapper[5099]: I0121 18:14:18.272839 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 18:14:18 crc kubenswrapper[5099]: I0121 18:14:18.273489 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:14:18 crc kubenswrapper[5099]: I0121 18:14:18.273531 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:14:18 crc kubenswrapper[5099]: I0121 18:14:18.273544 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:14:18 crc kubenswrapper[5099]: E0121 18:14:18.273882 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 18:14:18 crc kubenswrapper[5099]: I0121 18:14:18.501006 5099 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 21 18:14:19 crc kubenswrapper[5099]: I0121 18:14:19.275952 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 18:14:19 crc kubenswrapper[5099]: I0121 18:14:19.276043 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 18:14:19 crc kubenswrapper[5099]: I0121 18:14:19.277185 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:14:19 crc kubenswrapper[5099]: I0121 18:14:19.277232 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:14:19 crc kubenswrapper[5099]: I0121 18:14:19.277247 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:14:19 crc kubenswrapper[5099]: E0121 18:14:19.277729 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 18:14:19 crc kubenswrapper[5099]: I0121 18:14:19.370074 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-etcd/etcd-crc" Jan 21 18:14:19 crc kubenswrapper[5099]: I0121 18:14:19.370587 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 18:14:19 crc kubenswrapper[5099]: I0121 18:14:19.371861 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:14:19 crc kubenswrapper[5099]: I0121 18:14:19.371956 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:14:19 crc kubenswrapper[5099]: I0121 18:14:19.371969 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:14:19 crc kubenswrapper[5099]: E0121 18:14:19.372578 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 18:14:20 crc kubenswrapper[5099]: I0121 18:14:20.278444 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 18:14:20 crc kubenswrapper[5099]: I0121 
Jan 21 18:14:20 crc kubenswrapper[5099]: I0121 18:14:20.279244 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 18:14:20 crc kubenswrapper[5099]: I0121 18:14:20.279321 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 18:14:20 crc kubenswrapper[5099]: I0121 18:14:20.279341 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 18:14:20 crc kubenswrapper[5099]: E0121 18:14:20.279973 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 18:14:21 crc kubenswrapper[5099]: I0121 18:14:21.009346 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Jan 21 18:14:21 crc kubenswrapper[5099]: I0121 18:14:21.009716 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 18:14:21 crc kubenswrapper[5099]: I0121 18:14:21.010798 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 18:14:21 crc kubenswrapper[5099]: I0121 18:14:21.010838 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 18:14:21 crc kubenswrapper[5099]: I0121 18:14:21.010851 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 18:14:21 crc kubenswrapper[5099]: E0121 18:14:21.011346 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 18:14:23 crc kubenswrapper[5099]: I0121 18:14:23.705431 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 18:14:23 crc kubenswrapper[5099]: I0121 18:14:23.706502 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 18:14:23 crc kubenswrapper[5099]: I0121 18:14:23.706547 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 18:14:23 crc kubenswrapper[5099]: I0121 18:14:23.706556 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 18:14:23 crc kubenswrapper[5099]: I0121 18:14:23.706584 5099 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 21 18:14:23 crc kubenswrapper[5099]: E0121 18:14:23.993279 5099 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 21 18:14:24 crc kubenswrapper[5099]: I0121 18:14:24.940599 5099 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 21 18:14:24 crc kubenswrapper[5099]: I0121 18:14:24.940696 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 21 18:14:24 crc kubenswrapper[5099]: I0121 18:14:24.945680 5099 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 21 18:14:24 crc kubenswrapper[5099]: I0121 18:14:24.945813 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 21 18:14:26 crc kubenswrapper[5099]: I0121 18:14:26.651685 5099 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": context deadline exceeded" start-of-body=
Jan 21 18:14:26 crc kubenswrapper[5099]: I0121 18:14:26.651841 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": context deadline exceeded"
Jan 21 18:14:28 crc kubenswrapper[5099]: I0121 18:14:28.011320 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 18:14:28 crc kubenswrapper[5099]: I0121 18:14:28.011640 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 18:14:28 crc kubenswrapper[5099]: I0121 18:14:28.012026 5099 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 21 18:14:28 crc kubenswrapper[5099]: I0121 18:14:28.012077 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 21 18:14:28 crc kubenswrapper[5099]: I0121 18:14:28.012675 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 18:14:28 crc kubenswrapper[5099]: I0121 18:14:28.012714 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 18:14:28 crc kubenswrapper[5099]: I0121 18:14:28.012745 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 18:14:28 crc kubenswrapper[5099]: E0121 18:14:28.013123 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 18:14:28 crc kubenswrapper[5099]: I0121 18:14:28.017193 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
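
The startup probe's change from connection-refused to `HTTP probe failed with statuscode: 403` is progress: the apiserver socket and TLS stack are now up, and what remains is authorization. The probe carries no client credentials, so the apiserver evaluates it as `system:anonymous`, which at this point is not allowed to read `/livez`. Reproducing the same distinction by hand (illustrative; certificate verification is skipped because the bootstrap certs are self-signed):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

// An unauthenticated GET reaches the apiserver as system:anonymous, so a
// 403 here still proves the endpoint is serving, unlike the earlier
// connection-refused phase where nothing was listening at all.
func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://api-int.crc.testing:6443/livez")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect 403 Forbidden for anonymous
}
```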
Jan 21 18:14:28 crc kubenswrapper[5099]: I0121 18:14:28.298622 5099 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 21 18:14:28 crc kubenswrapper[5099]: I0121 18:14:28.298703 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 21 18:14:28 crc kubenswrapper[5099]: I0121 18:14:28.299068 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 18:14:28 crc kubenswrapper[5099]: I0121 18:14:28.299093 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 18:14:28 crc kubenswrapper[5099]: I0121 18:14:28.299102 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 18:14:28 crc kubenswrapper[5099]: E0121 18:14:28.299387 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 18:14:29 crc kubenswrapper[5099]: E0121 18:14:29.933594 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cd1a11ebcc551 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:03.585389905 +0000 UTC m=+0.999352366,LastTimestamp:2026-01-21 18:14:03.585389905 +0000 UTC m=+0.999352366,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 18:14:29 crc kubenswrapper[5099]: I0121 18:14:29.933937 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 21 18:14:29 crc kubenswrapper[5099]: E0121 18:14:29.934056 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 21 18:14:29 crc kubenswrapper[5099]: E0121 18:14:29.946180 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 21 18:14:29 crc kubenswrapper[5099]: E0121 18:14:29.946245 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cd1a121b07a5c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:03.634915932 +0000 UTC m=+1.048878393,LastTimestamp:2026-01-21 18:14:03.634915932 +0000 UTC m=+1.048878393,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 18:14:29 crc kubenswrapper[5099]: I0121 18:14:29.946715 5099 trace.go:236] Trace[1152891325]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 18:14:19.555) (total time: 10390ms):
Jan 21 18:14:29 crc kubenswrapper[5099]: Trace[1152891325]: ---"Objects listed" error:csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope 10390ms (18:14:29.946)
Jan 21 18:14:29 crc kubenswrapper[5099]: Trace[1152891325]: [10.390693698s] [10.390693698s] END
Jan 21 18:14:29 crc kubenswrapper[5099]: E0121 18:14:29.946756 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 21 18:14:29 crc kubenswrapper[5099]: I0121 18:14:29.946820 5099 trace.go:236] Trace[1839851897]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 18:14:19.097) (total time: 10849ms):
Jan 21 18:14:29 crc kubenswrapper[5099]: Trace[1839851897]: ---"Objects listed" error:runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope 10849ms (18:14:29.946)
Jan 21 18:14:29 crc kubenswrapper[5099]: Trace[1839851897]: [10.849658979s] [10.849658979s] END
Jan 21 18:14:29 crc kubenswrapper[5099]: E0121 18:14:29.946831 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 21 18:14:29 crc kubenswrapper[5099]: E0121 18:14:29.946881 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Jan 21 18:14:29 crc kubenswrapper[5099]: E0121 18:14:29.946968 5099 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Jan 21 18:14:29 crc kubenswrapper[5099]: E0121 18:14:29.949902 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cd1a121b0ebea default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:03.634945002 +0000 UTC m=+1.048907463,LastTimestamp:2026-01-21 18:14:03.634945002 +0000 UTC m=+1.048907463,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cd1a121b0ebea default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:03.634945002 +0000 UTC m=+1.048907463,LastTimestamp:2026-01-21 18:14:03.634945002 +0000 UTC m=+1.048907463,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:29 crc kubenswrapper[5099]: E0121 18:14:29.952584 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cd1a121b11a53 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:03.634956883 +0000 UTC m=+1.048919344,LastTimestamp:2026-01-21 18:14:03.634956883 +0000 UTC m=+1.048919344,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:29 crc kubenswrapper[5099]: E0121 18:14:29.955021 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cd1a136e69cc7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:03.990785223 +0000 UTC m=+1.404747684,LastTimestamp:2026-01-21 18:14:03.990785223 +0000 UTC m=+1.404747684,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:29 crc kubenswrapper[5099]: E0121 18:14:29.957209 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cd1a121b07a5c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cd1a121b07a5c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:03.634915932 +0000 UTC m=+1.048878393,LastTimestamp:2026-01-21 18:14:04.014235744 +0000 UTC m=+1.428198205,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:29 crc kubenswrapper[5099]: E0121 18:14:29.960284 5099 event.go:359] "Server rejected event (will not retry!)" err="events 
\"crc.188cd1a121b0ebea\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cd1a121b0ebea default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:03.634945002 +0000 UTC m=+1.048907463,LastTimestamp:2026-01-21 18:14:04.014263935 +0000 UTC m=+1.428226396,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:29 crc kubenswrapper[5099]: E0121 18:14:29.965347 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cd1a121b11a53\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cd1a121b11a53 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:03.634956883 +0000 UTC m=+1.048919344,LastTimestamp:2026-01-21 18:14:04.014297806 +0000 UTC m=+1.428260267,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:29 crc kubenswrapper[5099]: E0121 18:14:29.970855 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cd1a121b07a5c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cd1a121b07a5c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:03.634915932 +0000 UTC m=+1.048878393,LastTimestamp:2026-01-21 18:14:04.015985067 +0000 UTC m=+1.429947538,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:29 crc kubenswrapper[5099]: I0121 18:14:29.972878 5099 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Jan 21 18:14:29 crc kubenswrapper[5099]: E0121 18:14:29.976670 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cd1a121b0ebea\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cd1a121b0ebea default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:03.634945002 
+0000 UTC m=+1.048907463,LastTimestamp:2026-01-21 18:14:04.016008887 +0000 UTC m=+1.429971358,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:29 crc kubenswrapper[5099]: E0121 18:14:29.984099 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cd1a121b11a53\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cd1a121b11a53 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:03.634956883 +0000 UTC m=+1.048919344,LastTimestamp:2026-01-21 18:14:04.016020278 +0000 UTC m=+1.429982749,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:29 crc kubenswrapper[5099]: E0121 18:14:29.992346 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cd1a121b07a5c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cd1a121b07a5c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:03.634915932 +0000 UTC m=+1.048878393,LastTimestamp:2026-01-21 18:14:04.016232773 +0000 UTC m=+1.430195234,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:29 crc kubenswrapper[5099]: E0121 18:14:29.997024 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cd1a121b0ebea\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cd1a121b0ebea default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:03.634945002 +0000 UTC m=+1.048907463,LastTimestamp:2026-01-21 18:14:04.016245703 +0000 UTC m=+1.430208164,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.013769 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cd1a121b11a53\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cd1a121b11a53 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: 
NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:03.634956883 +0000 UTC m=+1.048919344,LastTimestamp:2026-01-21 18:14:04.016264283 +0000 UTC m=+1.430226764,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.021834 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cd1a121b07a5c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cd1a121b07a5c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:03.634915932 +0000 UTC m=+1.048878393,LastTimestamp:2026-01-21 18:14:04.022171215 +0000 UTC m=+1.436133676,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.039666 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cd1a121b0ebea\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cd1a121b0ebea default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:03.634945002 +0000 UTC m=+1.048907463,LastTimestamp:2026-01-21 18:14:04.022187765 +0000 UTC m=+1.436150226,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.049777 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cd1a121b11a53\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cd1a121b11a53 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:03.634956883 +0000 UTC m=+1.048919344,LastTimestamp:2026-01-21 18:14:04.022198645 +0000 UTC m=+1.436161106,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.054444 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cd1a121b07a5c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cd1a121b07a5c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:03.634915932 +0000 UTC m=+1.048878393,LastTimestamp:2026-01-21 18:14:04.022197085 +0000 UTC m=+1.436159556,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.059179 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cd1a121b0ebea\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cd1a121b0ebea default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:03.634945002 +0000 UTC m=+1.048907463,LastTimestamp:2026-01-21 18:14:04.022219196 +0000 UTC m=+1.436181667,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.064134 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cd1a121b11a53\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cd1a121b11a53 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:03.634956883 +0000 UTC m=+1.048919344,LastTimestamp:2026-01-21 18:14:04.022234416 +0000 UTC m=+1.436196897,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.068794 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cd1a121b07a5c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cd1a121b07a5c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:03.634915932 +0000 UTC m=+1.048878393,LastTimestamp:2026-01-21 18:14:04.024167402 +0000 UTC m=+1.438129863,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.075795 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cd1a121b0ebea\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace 
\"default\"" event="&Event{ObjectMeta:{crc.188cd1a121b0ebea default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:03.634945002 +0000 UTC m=+1.048907463,LastTimestamp:2026-01-21 18:14:04.024179352 +0000 UTC m=+1.438141813,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.081646 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cd1a121b11a53\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cd1a121b11a53 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:03.634956883 +0000 UTC m=+1.048919344,LastTimestamp:2026-01-21 18:14:04.024188713 +0000 UTC m=+1.438151174,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.086955 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cd1a121b07a5c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cd1a121b07a5c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:03.634915932 +0000 UTC m=+1.048878393,LastTimestamp:2026-01-21 18:14:04.024259484 +0000 UTC m=+1.438221965,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.093959 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cd1a121b0ebea\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cd1a121b0ebea default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:03.634945002 +0000 UTC m=+1.048907463,LastTimestamp:2026-01-21 18:14:04.024286745 +0000 UTC m=+1.438249216,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.100028 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188cd1a14ecbaa53 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:04.391672403 +0000 UTC m=+1.805634864,LastTimestamp:2026-01-21 18:14:04.391672403 +0000 UTC m=+1.805634864,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.113481 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cd1a14ed30042 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:04.392153154 +0000 UTC m=+1.806115635,LastTimestamp:2026-01-21 18:14:04.392153154 +0000 UTC m=+1.806115635,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.123190 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cd1a14f92c19a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:04.404720026 +0000 UTC m=+1.818682487,LastTimestamp:2026-01-21 18:14:04.404720026 +0000 UTC m=+1.818682487,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.128628 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" 
event="&Event{ObjectMeta:{kube-controller-manager-crc.188cd1a150062825 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:04.412282917 +0000 UTC m=+1.826245388,LastTimestamp:2026-01-21 18:14:04.412282917 +0000 UTC m=+1.826245388,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.134837 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188cd1a15283d9b8 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:04.454074808 +0000 UTC m=+1.868037269,LastTimestamp:2026-01-21 18:14:04.454074808 +0000 UTC m=+1.868037269,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.144218 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188cd1a1765ed765 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:05.055629157 +0000 UTC m=+2.469591618,LastTimestamp:2026-01-21 18:14:05.055629157 +0000 UTC m=+2.469591618,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.171178 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188cd1a176601a23 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC 
map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:05.055711779 +0000 UTC m=+2.469674240,LastTimestamp:2026-01-21 18:14:05.055711779 +0000 UTC m=+2.469674240,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.175636 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188cd1a17661fc88 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container: wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:05.055835272 +0000 UTC m=+2.469797733,LastTimestamp:2026-01-21 18:14:05.055835272 +0000 UTC m=+2.469797733,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.181180 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cd1a17662c119 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:05.055885593 +0000 UTC m=+2.469848054,LastTimestamp:2026-01-21 18:14:05.055885593 +0000 UTC m=+2.469848054,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.186624 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cd1a176676a18 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:05.056191 +0000 UTC m=+2.470153461,LastTimestamp:2026-01-21 18:14:05.056191 +0000 UTC m=+2.470153461,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.191048 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188cd1a1775456e1 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:05.071718113 +0000 UTC m=+2.485680574,LastTimestamp:2026-01-21 18:14:05.071718113 +0000 UTC m=+2.485680574,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.194991 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188cd1a1776b32f9 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:05.073216249 +0000 UTC m=+2.487178710,LastTimestamp:2026-01-21 18:14:05.073216249 +0000 UTC m=+2.487178710,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.198792 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cd1a177b6e111 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:05.078176017 +0000 UTC m=+2.492138468,LastTimestamp:2026-01-21 18:14:05.078176017 +0000 UTC m=+2.492138468,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.203904 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188cd1a177b876c8 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:05.07827988 +0000 UTC m=+2.492242341,LastTimestamp:2026-01-21 18:14:05.07827988 +0000 UTC m=+2.492242341,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.207639 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cd1a177ba060c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:05.078382092 +0000 UTC m=+2.492344553,LastTimestamp:2026-01-21 18:14:05.078382092 +0000 UTC m=+2.492344553,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.211596 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188cd1a177ba590e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:05.078403342 +0000 UTC m=+2.492365803,LastTimestamp:2026-01-21 18:14:05.078403342 +0000 UTC m=+2.492365803,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.215828 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188cd1a1aa5c116b openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:05.927862635 +0000 UTC m=+3.341825096,LastTimestamp:2026-01-21 18:14:05.927862635 +0000 UTC m=+3.341825096,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.220007 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cd1a1aa97e8c5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:05.931784389 +0000 UTC m=+3.345746850,LastTimestamp:2026-01-21 18:14:05.931784389 +0000 UTC m=+3.345746850,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.224123 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cd1a1ab91760d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:05.948139021 +0000 UTC m=+3.362101472,LastTimestamp:2026-01-21 18:14:05.948139021 +0000 UTC m=+3.362101472,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.228149 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188cd1a1f3ac81e7 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:07.157871079 +0000 UTC m=+4.571833550,LastTimestamp:2026-01-21 18:14:07.157871079 +0000 UTC m=+4.571833550,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.232526 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188cd1a1f3ad632e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:07.15792875 +0000 UTC m=+4.571891221,LastTimestamp:2026-01-21 18:14:07.15792875 +0000 UTC m=+4.571891221,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.236164 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188cd1a1f827c2b5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:07.233057461 +0000 UTC m=+4.647019922,LastTimestamp:2026-01-21 18:14:07.233057461 +0000 UTC m=+4.647019922,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.239587 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188cd1a1f88d5174 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:07.23971314 +0000 UTC m=+4.653675601,LastTimestamp:2026-01-21 18:14:07.23971314 +0000 UTC m=+4.653675601,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.243178 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cd1a20c49ad94 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container: etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:07.570824596 +0000 UTC m=+4.984787057,LastTimestamp:2026-01-21 18:14:07.570824596 +0000 UTC m=+4.984787057,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.247200 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188cd1a20c78e14f openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:07.573918031 +0000 UTC m=+4.987880492,LastTimestamp:2026-01-21 18:14:07.573918031 +0000 UTC m=+4.987880492,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.251156 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188cd1a20c83f604 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: 
kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:07.574644228 +0000 UTC m=+4.988606679,LastTimestamp:2026-01-21 18:14:07.574644228 +0000 UTC m=+4.988606679,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.256383 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cd1a20ceee21e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:07.581651486 +0000 UTC m=+4.995613947,LastTimestamp:2026-01-21 18:14:07.581651486 +0000 UTC m=+4.995613947,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.261752 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cd1a20d2edb08 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:07.585843976 +0000 UTC m=+4.999806437,LastTimestamp:2026-01-21 18:14:07.585843976 +0000 UTC m=+4.999806437,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.266917 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188cd1a20e64f154 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:07.606165844 +0000 UTC m=+5.020128305,LastTimestamp:2026-01-21 18:14:07.606165844 +0000 UTC m=+5.020128305,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.271114 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188cd1a20e72f6fc openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:07.607084796 +0000 UTC m=+5.021047257,LastTimestamp:2026-01-21 18:14:07.607084796 +0000 UTC m=+5.021047257,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.278006 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188cd1a20f38dc8d openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:07.620054157 +0000 UTC m=+5.034016618,LastTimestamp:2026-01-21 18:14:07.620054157 +0000 UTC m=+5.034016618,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.282563 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cd1a20f390f5f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:07.620067167 +0000 UTC m=+5.034029628,LastTimestamp:2026-01-21 18:14:07.620067167 +0000 UTC m=+5.034029628,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.286526 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cd1a20f9f66ab openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:07.626774187 +0000 UTC m=+5.040736648,LastTimestamp:2026-01-21 18:14:07.626774187 +0000 UTC m=+5.040736648,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.288433 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188cd1a2239bc701 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container: kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:07.962081025 +0000 UTC m=+5.376043486,LastTimestamp:2026-01-21 18:14:07.962081025 +0000 UTC m=+5.376043486,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.292968 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188cd1a224473004 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:07.973314564 +0000 UTC m=+5.387277045,LastTimestamp:2026-01-21 18:14:07.973314564 +0000 UTC m=+5.387277045,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.305172 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188cd1a2245dfdee openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:07.97480907 +0000 UTC m=+5.388771851,LastTimestamp:2026-01-21 18:14:07.97480907 +0000 UTC m=+5.388771851,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.311398 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188cd1a22dcfe7dc openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container: kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:08.133269468 +0000 UTC m=+5.547231949,LastTimestamp:2026-01-21 18:14:08.133269468 +0000 UTC m=+5.547231949,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.316441 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cd1a22ea1b88e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container: kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:08.147019918 +0000 UTC m=+5.560982379,LastTimestamp:2026-01-21 18:14:08.147019918 +0000 UTC m=+5.560982379,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.327499 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188cd1a22eb1ba45 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:08.148068933 +0000 UTC m=+5.562031394,LastTimestamp:2026-01-21 18:14:08.148068933 +0000 UTC m=+5.562031394,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.332514 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188cd1a22ebd3297 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:08.148820631 +0000 UTC m=+5.562783092,LastTimestamp:2026-01-21 18:14:08.148820631 +0000 UTC m=+5.562783092,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.339201 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cd1a230655ebe openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:08.176619198 +0000 UTC m=+5.590581659,LastTimestamp:2026-01-21 18:14:08.176619198 +0000 UTC m=+5.590581659,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.344611 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cd1a232df485e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:08.218163294 +0000 UTC m=+5.632125755,LastTimestamp:2026-01-21 18:14:08.218163294 +0000 UTC m=+5.632125755,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.352174 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cd1a23321a64e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:08.222512718 +0000 UTC m=+5.636475179,LastTimestamp:2026-01-21 18:14:08.222512718 +0000 UTC m=+5.636475179,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.357224 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188cd1a2772748a3 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container: kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:09.363732643 +0000 UTC m=+6.777695104,LastTimestamp:2026-01-21 18:14:09.363732643 +0000 UTC m=+6.777695104,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.362398 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188cd1a278243d59 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container: kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:09.380310361 +0000 UTC m=+6.794272822,LastTimestamp:2026-01-21 18:14:09.380310361 +0000 UTC m=+6.794272822,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.370871 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cd1a2784e6b39 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container: etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:09.383074617 +0000 UTC m=+6.797037078,LastTimestamp:2026-01-21 18:14:09.383074617 +0000 UTC m=+6.797037078,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.405996 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cd1a27851c6b2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container: kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:09.383294642 +0000 UTC m=+6.797257103,LastTimestamp:2026-01-21 18:14:09.383294642 +0000 UTC m=+6.797257103,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.411317 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188cd1a27937ba33 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container 
kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:09.398364723 +0000 UTC m=+6.812327204,LastTimestamp:2026-01-21 18:14:09.398364723 +0000 UTC m=+6.812327204,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.418678 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cd1a2796b7a40 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:09.401756224 +0000 UTC m=+6.815718685,LastTimestamp:2026-01-21 18:14:09.401756224 +0000 UTC m=+6.815718685,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.427815 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cd1a2796c0943 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:09.401792835 +0000 UTC m=+6.815755296,LastTimestamp:2026-01-21 18:14:09.401792835 +0000 UTC m=+6.815755296,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.436956 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cd1a27985e698 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:09.403487896 +0000 UTC m=+6.817450367,LastTimestamp:2026-01-21 18:14:09.403487896 +0000 UTC m=+6.817450367,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.446920 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188cd1a27b19d87f openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:09.429960831 +0000 UTC m=+6.843923292,LastTimestamp:2026-01-21 18:14:09.429960831 +0000 UTC m=+6.843923292,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.454228 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cd1a2aa263825 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:10.219300901 +0000 UTC m=+7.633263382,LastTimestamp:2026-01-21 18:14:10.219300901 +0000 UTC m=+7.633263382,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.458919 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cd1a2c5b487fb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:10.681612283 +0000 UTC m=+8.095574744,LastTimestamp:2026-01-21 18:14:10.681612283 +0000 UTC m=+8.095574744,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.464582 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cd1a2d930abad openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:11.008514989 +0000 UTC m=+8.422477450,LastTimestamp:2026-01-21 18:14:11.008514989 +0000 UTC m=+8.422477450,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.469062 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cd1a2d932de6d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:11.008659053 +0000 UTC m=+8.422621514,LastTimestamp:2026-01-21 18:14:11.008659053 +0000 UTC m=+8.422621514,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.472920 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cd1a2d947309f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:11.009990815 +0000 UTC m=+8.423953276,LastTimestamp:2026-01-21 18:14:11.009990815 +0000 UTC m=+8.423953276,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.478682 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cd1a32042d152 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container 
etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:12.20088661 +0000 UTC m=+9.614849061,LastTimestamp:2026-01-21 18:14:12.20088661 +0000 UTC m=+9.614849061,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.484718 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cd1a32052f20d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:12.201943565 +0000 UTC m=+9.615906026,LastTimestamp:2026-01-21 18:14:12.201943565 +0000 UTC m=+9.615906026,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.488993 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cd1a320557513 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:12.202108179 +0000 UTC m=+9.616070630,LastTimestamp:2026-01-21 18:14:12.202108179 +0000 UTC m=+9.616070630,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.493294 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cd1a322ce4342 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:12.243579714 +0000 UTC m=+9.657542175,LastTimestamp:2026-01-21 18:14:12.243579714 +0000 UTC m=+9.657542175,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc 
kubenswrapper[5099]: E0121 18:14:30.497440 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cd1a33ca0fba6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:12.676819878 +0000 UTC m=+10.090782329,LastTimestamp:2026-01-21 18:14:12.676819878 +0000 UTC m=+10.090782329,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.502335 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cd1a33e9e6484 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:12.710204548 +0000 UTC m=+10.124167009,LastTimestamp:2026-01-21 18:14:12.710204548 +0000 UTC m=+10.124167009,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.509286 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cd1a33eb0463e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:12.711376446 +0000 UTC m=+10.125338907,LastTimestamp:2026-01-21 18:14:12.711376446 +0000 UTC m=+10.125338907,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.515014 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cd1a36dac8322 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container: etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:13.499659042 +0000 UTC m=+10.913621503,LastTimestamp:2026-01-21 18:14:13.499659042 +0000 UTC m=+10.913621503,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.520472 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cd1a37079832f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:13.546648367 +0000 UTC m=+10.960610828,LastTimestamp:2026-01-21 18:14:13.546648367 +0000 UTC m=+10.960610828,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.524546 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cd1a3709740f7 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:13.548597495 +0000 UTC m=+10.962559966,LastTimestamp:2026-01-21 18:14:13.548597495 +0000 UTC m=+10.962559966,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.532238 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cd1a3910d5659 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container: etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:14.093207129 +0000 UTC m=+11.507169590,LastTimestamp:2026-01-21 18:14:14.093207129 +0000 UTC m=+11.507169590,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.543282 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cd1a3936e1bcd openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:14.133103565 +0000 UTC m=+11.547066026,LastTimestamp:2026-01-21 18:14:14.133103565 +0000 UTC m=+11.547066026,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.554265 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cd1a393c93c05 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:14.139075589 +0000 UTC m=+11.553038050,LastTimestamp:2026-01-21 18:14:14.139075589 +0000 UTC m=+11.553038050,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.560618 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cd1a3ad9c3b78 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container: etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:14.572333944 +0000 UTC m=+11.986296405,LastTimestamp:2026-01-21 18:14:14.572333944 +0000 UTC m=+11.986296405,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.566010 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cd1a3ae4f6857 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:14.584076375 +0000 UTC m=+11.998038836,LastTimestamp:2026-01-21 18:14:14.584076375 +0000 UTC m=+11.998038836,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.571581 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=<
Jan 21 18:14:30 crc kubenswrapper[5099]: &Event{ObjectMeta:{kube-controller-manager-crc.188cd1a4297d0b43 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 21 18:14:30 crc kubenswrapper[5099]: body:
Jan 21 18:14:30 crc kubenswrapper[5099]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:16.650664771 +0000 UTC m=+14.064627262,LastTimestamp:2026-01-21 18:14:16.650664771 +0000 UTC m=+14.064627262,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Jan 21 18:14:30 crc kubenswrapper[5099]: >
Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.577526 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188cd1a4297f8a58 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:16.650828376 +0000 UTC m=+14.064790867,LastTimestamp:2026-01-21 18:14:16.650828376 +0000 UTC m=+14.064790867,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 18:14:30 crc kubenswrapper[5099]: I0121 18:14:30.585212 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.585156 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188cd1a2d947309f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cd1a2d947309f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:11.009990815 +0000 UTC m=+8.423953276,LastTimestamp:2026-01-21 18:14:17.268804369 +0000 UTC m=+14.682766830,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.589389 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188cd1a320557513\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cd1a320557513 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:12.202108179 +0000 UTC m=+9.616070630,LastTimestamp:2026-01-21 18:14:17.548389321 +0000 UTC m=+14.962351802,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.594126 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188cd1a322ce4342\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cd1a322ce4342 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:12.243579714 +0000 UTC m=+9.657542175,LastTimestamp:2026-01-21 18:14:17.569833164 +0000 UTC m=+14.983795625,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.599422 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Jan 21 18:14:30 crc kubenswrapper[5099]: &Event{ObjectMeta:{kube-apiserver-crc.188cd1a6179c35c7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403
Jan 21 18:14:30 crc kubenswrapper[5099]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 21 18:14:30 crc kubenswrapper[5099]:
Jan 21 18:14:30 crc kubenswrapper[5099]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:24.940651975 +0000 UTC m=+22.354614436,LastTimestamp:2026-01-21 18:14:24.940651975 +0000 UTC m=+22.354614436,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Jan 21 18:14:30 crc kubenswrapper[5099]: >
Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.607601 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cd1a6179d47f6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:24.940722166 +0000 UTC m=+22.354684637,LastTimestamp:2026-01-21 18:14:24.940722166 +0000 UTC m=+22.354684637,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.612779 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188cd1a6179c35c7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Jan 21 18:14:30 crc kubenswrapper[5099]: &Event{ObjectMeta:{kube-apiserver-crc.188cd1a6179c35c7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403
Jan 21 18:14:30 crc kubenswrapper[5099]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 21 18:14:30 crc kubenswrapper[5099]:
Jan 21 18:14:30 crc kubenswrapper[5099]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:24.940651975 +0000 UTC m=+22.354614436,LastTimestamp:2026-01-21 18:14:24.945729597 +0000 UTC m=+22.359692068,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Jan 21 18:14:30 crc kubenswrapper[5099]: >
Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.617103 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188cd1a6179d47f6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cd1a6179d47f6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:24.940722166 +0000 UTC m=+22.354684637,LastTimestamp:2026-01-21 18:14:24.945841559 +0000 UTC m=+22.359804040,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.623520 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=<
Jan 21 18:14:30 crc kubenswrapper[5099]: &Event{ObjectMeta:{kube-controller-manager-crc.188cd1a67d9a03a5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": context deadline exceeded
Jan 21 18:14:30 crc kubenswrapper[5099]: body:
Jan 21 18:14:30 crc kubenswrapper[5099]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:26.651784101 +0000 UTC m=+24.065746652,LastTimestamp:2026-01-21 18:14:26.651784101 +0000 UTC m=+24.065746652,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Jan 21 18:14:30 crc kubenswrapper[5099]: >
Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.628223 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188cd1a67d9c1ea4 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:26.651922084 +0000 UTC m=+24.065884585,LastTimestamp:2026-01-21 18:14:26.651922084 +0000 UTC m=+24.065884585,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.633783 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Jan 21 18:14:30 crc kubenswrapper[5099]: &Event{ObjectMeta:{kube-apiserver-crc.188cd1a6ceae32ad openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused
Jan 21 18:14:30 crc kubenswrapper[5099]: body:
Jan 21 18:14:30 crc kubenswrapper[5099]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:28.012061357 +0000 UTC m=+25.426023818,LastTimestamp:2026-01-21 18:14:28.012061357 +0000 UTC m=+25.426023818,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Jan 21 18:14:30 crc kubenswrapper[5099]: >
Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.637569 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cd1a6ceaed6be openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:28.012103358 +0000 UTC m=+25.426065819,LastTimestamp:2026-01-21 18:14:28.012103358 +0000 UTC m=+25.426065819,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.642241 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188cd1a6ceae32ad\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Jan 21 18:14:30 crc kubenswrapper[5099]: &Event{ObjectMeta:{kube-apiserver-crc.188cd1a6ceae32ad openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused
Jan 21 18:14:30 crc kubenswrapper[5099]: body:
Jan 21 18:14:30 crc kubenswrapper[5099]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:28.012061357 +0000 UTC m=+25.426023818,LastTimestamp:2026-01-21 18:14:28.298682028 +0000 UTC m=+25.712644499,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Jan 21 18:14:30 crc kubenswrapper[5099]: >
Jan 21 18:14:30 crc kubenswrapper[5099]: E0121 18:14:30.646150 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188cd1a6ceaed6be\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cd1a6ceaed6be openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:28.012103358 +0000 UTC m=+25.426065819,LastTimestamp:2026-01-21 18:14:28.298724509 +0000 UTC m=+25.712686970,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 18:14:31 crc kubenswrapper[5099]: I0121 18:14:31.034153 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Jan 21 18:14:31 crc kubenswrapper[5099]: I0121 18:14:31.034478 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 18:14:31 crc kubenswrapper[5099]: I0121 18:14:31.035614 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 18:14:31 crc kubenswrapper[5099]: I0121 18:14:31.035668 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 18:14:31 crc kubenswrapper[5099]: I0121 18:14:31.035685 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 18:14:31 crc kubenswrapper[5099]: E0121 18:14:31.036255 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 18:14:31 crc kubenswrapper[5099]: I0121 18:14:31.046481 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Jan 21 18:14:31 crc kubenswrapper[5099]: I0121 18:14:31.307915 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Jan 21 18:14:31 crc kubenswrapper[5099]: I0121 18:14:31.308494 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log"
Jan 21 18:14:31 crc kubenswrapper[5099]: I0121 18:14:31.310340 5099 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="0365d543f1255285ea5494d8794b264c0e22001f25e080150e4520836b258be6" exitCode=255
Jan 21 18:14:31 crc kubenswrapper[5099]: I0121 18:14:31.310433 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"0365d543f1255285ea5494d8794b264c0e22001f25e080150e4520836b258be6"}
Jan 21 18:14:31 crc kubenswrapper[5099]: I0121 18:14:31.310504 5099 scope.go:117] "RemoveContainer" containerID="d98b3ac9b8f7864e0e251f84b7166574ccc7613b3ef8a6094c6d5c24b8d6ca02"
Jan 21 18:14:31 crc kubenswrapper[5099]: I0121 18:14:31.310657 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 18:14:31 crc kubenswrapper[5099]: I0121 18:14:31.310986 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 18:14:31 crc kubenswrapper[5099]: I0121 18:14:31.311508 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 18:14:31 crc kubenswrapper[5099]: I0121 18:14:31.311541 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 18:14:31 crc kubenswrapper[5099]: I0121 18:14:31.311557 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 18:14:31 crc kubenswrapper[5099]: I0121 18:14:31.311925 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 18:14:31 crc kubenswrapper[5099]: I0121 18:14:31.311968 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 18:14:31 crc kubenswrapper[5099]: E0121 18:14:31.311986 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 18:14:31 crc kubenswrapper[5099]: I0121 18:14:31.311993 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 18:14:31 crc kubenswrapper[5099]: E0121 18:14:31.312443 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 18:14:31 crc kubenswrapper[5099]: I0121 18:14:31.312884 5099 scope.go:117] "RemoveContainer" containerID="0365d543f1255285ea5494d8794b264c0e22001f25e080150e4520836b258be6"
Jan 21 18:14:31 crc kubenswrapper[5099]: E0121 18:14:31.313120 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 21 18:14:31 crc kubenswrapper[5099]: E0121 18:14:31.318565 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cd1a7936fba3c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:31.313078844 +0000 UTC m=+28.727041315,LastTimestamp:2026-01-21 18:14:31.313078844 +0000 UTC m=+28.727041315,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:31 crc kubenswrapper[5099]: I0121 18:14:31.584615 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 18:14:32 crc kubenswrapper[5099]: I0121 18:14:32.251332 5099 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 18:14:32 crc kubenswrapper[5099]: I0121 18:14:32.313912 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 21 18:14:32 crc kubenswrapper[5099]: I0121 18:14:32.315676 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 18:14:32 crc kubenswrapper[5099]: I0121 18:14:32.316290 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:14:32 crc kubenswrapper[5099]: I0121 18:14:32.316350 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:14:32 crc kubenswrapper[5099]: I0121 18:14:32.316361 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:14:32 crc kubenswrapper[5099]: E0121 18:14:32.316879 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 18:14:32 crc kubenswrapper[5099]: I0121 18:14:32.317342 5099 scope.go:117] "RemoveContainer" containerID="0365d543f1255285ea5494d8794b264c0e22001f25e080150e4520836b258be6" Jan 21 18:14:32 crc kubenswrapper[5099]: E0121 18:14:32.317651 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 21 18:14:32 crc kubenswrapper[5099]: E0121 18:14:32.322141 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188cd1a7936fba3c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cd1a7936fba3c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:31.313078844 +0000 UTC m=+28.727041315,LastTimestamp:2026-01-21 18:14:32.317607082 +0000 UTC m=+29.731569543,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:32 crc kubenswrapper[5099]: I0121 18:14:32.611117 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 18:14:33 crc kubenswrapper[5099]: I0121 18:14:33.583534 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 18:14:33 crc kubenswrapper[5099]: I0121 18:14:33.655801 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 18:14:33 crc kubenswrapper[5099]: I0121 18:14:33.656071 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 18:14:33 crc kubenswrapper[5099]: I0121 18:14:33.657032 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:14:33 crc kubenswrapper[5099]: I0121 18:14:33.657101 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:14:33 crc kubenswrapper[5099]: I0121 18:14:33.657122 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:14:33 crc kubenswrapper[5099]: E0121 18:14:33.657594 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 18:14:33 crc kubenswrapper[5099]: I0121 18:14:33.660807 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 18:14:33 crc kubenswrapper[5099]: E0121 18:14:33.994232 5099 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 21 18:14:34 crc kubenswrapper[5099]: I0121 18:14:34.320450 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 18:14:34 crc kubenswrapper[5099]: I0121 18:14:34.321133 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:14:34 crc kubenswrapper[5099]: I0121 18:14:34.321180 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:14:34 crc kubenswrapper[5099]: I0121 18:14:34.321197 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:14:34 crc 
kubenswrapper[5099]: E0121 18:14:34.321678 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 18:14:34 crc kubenswrapper[5099]: I0121 18:14:34.584169 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 18:14:35 crc kubenswrapper[5099]: I0121 18:14:35.585019 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 18:14:36 crc kubenswrapper[5099]: I0121 18:14:36.582456 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 18:14:36 crc kubenswrapper[5099]: I0121 18:14:36.947052 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 18:14:36 crc kubenswrapper[5099]: I0121 18:14:36.948101 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:14:36 crc kubenswrapper[5099]: I0121 18:14:36.948149 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:14:36 crc kubenswrapper[5099]: I0121 18:14:36.948160 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:14:36 crc kubenswrapper[5099]: I0121 18:14:36.948185 5099 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 21 18:14:36 crc kubenswrapper[5099]: E0121 18:14:36.954477 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 21 18:14:36 crc kubenswrapper[5099]: E0121 18:14:36.960960 5099 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 21 18:14:37 crc kubenswrapper[5099]: I0121 18:14:37.583514 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 18:14:38 crc kubenswrapper[5099]: I0121 18:14:38.585237 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 18:14:39 crc kubenswrapper[5099]: I0121 18:14:39.583500 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 18:14:40 crc kubenswrapper[5099]: I0121 18:14:40.583598 5099 csi_plugin.go:988] Failed 
to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 18:14:41 crc kubenswrapper[5099]: I0121 18:14:41.582300 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 18:14:42 crc kubenswrapper[5099]: I0121 18:14:42.583496 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 18:14:43 crc kubenswrapper[5099]: I0121 18:14:43.583778 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 18:14:43 crc kubenswrapper[5099]: I0121 18:14:43.961705 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 18:14:43 crc kubenswrapper[5099]: I0121 18:14:43.962549 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:14:43 crc kubenswrapper[5099]: I0121 18:14:43.962587 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:14:43 crc kubenswrapper[5099]: I0121 18:14:43.962600 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:14:43 crc kubenswrapper[5099]: I0121 18:14:43.962627 5099 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 21 18:14:43 crc kubenswrapper[5099]: E0121 18:14:43.965087 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 21 18:14:43 crc kubenswrapper[5099]: E0121 18:14:43.970930 5099 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 21 18:14:43 crc kubenswrapper[5099]: E0121 18:14:43.995605 5099 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 21 18:14:44 crc kubenswrapper[5099]: I0121 18:14:44.583719 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 18:14:44 crc kubenswrapper[5099]: E0121 18:14:44.745891 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 21 18:14:45 crc kubenswrapper[5099]: E0121 18:14:45.168769 5099 reflector.go:200] "Failed to watch" err="failed to list 
*v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 21 18:14:45 crc kubenswrapper[5099]: I0121 18:14:45.589260 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 18:14:46 crc kubenswrapper[5099]: I0121 18:14:46.585603 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 18:14:46 crc kubenswrapper[5099]: I0121 18:14:46.912939 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 18:14:46 crc kubenswrapper[5099]: I0121 18:14:46.914133 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:14:46 crc kubenswrapper[5099]: I0121 18:14:46.914186 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:14:46 crc kubenswrapper[5099]: I0121 18:14:46.914203 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:14:46 crc kubenswrapper[5099]: E0121 18:14:46.914641 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 18:14:46 crc kubenswrapper[5099]: I0121 18:14:46.914981 5099 scope.go:117] "RemoveContainer" containerID="0365d543f1255285ea5494d8794b264c0e22001f25e080150e4520836b258be6" Jan 21 18:14:46 crc kubenswrapper[5099]: E0121 18:14:46.925982 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188cd1a2d947309f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cd1a2d947309f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:11.009990815 +0000 UTC m=+8.423953276,LastTimestamp:2026-01-21 18:14:46.916341175 +0000 UTC m=+44.330303636,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:47 crc kubenswrapper[5099]: E0121 18:14:47.110210 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188cd1a320557513\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cd1a320557513 
openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:12.202108179 +0000 UTC m=+9.616070630,LastTimestamp:2026-01-21 18:14:47.104342401 +0000 UTC m=+44.518304862,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:47 crc kubenswrapper[5099]: E0121 18:14:47.120542 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188cd1a322ce4342\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cd1a322ce4342 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:12.243579714 +0000 UTC m=+9.657542175,LastTimestamp:2026-01-21 18:14:47.115043278 +0000 UTC m=+44.529005739,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:47 crc kubenswrapper[5099]: I0121 18:14:47.356089 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 21 18:14:47 crc kubenswrapper[5099]: I0121 18:14:47.358377 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"826eb8ff813d051fd23ef43f1b4137c7466d05d85d0ebfaed6471fcc40f698a7"} Jan 21 18:14:47 crc kubenswrapper[5099]: I0121 18:14:47.358570 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 18:14:47 crc kubenswrapper[5099]: I0121 18:14:47.359270 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:14:47 crc kubenswrapper[5099]: I0121 18:14:47.359353 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:14:47 crc kubenswrapper[5099]: I0121 18:14:47.359366 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:14:47 crc kubenswrapper[5099]: E0121 18:14:47.359791 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 18:14:47 crc kubenswrapper[5099]: I0121 18:14:47.586251 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 
18:14:48 crc kubenswrapper[5099]: I0121 18:14:48.364162 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 21 18:14:48 crc kubenswrapper[5099]: I0121 18:14:48.365622 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 21 18:14:48 crc kubenswrapper[5099]: I0121 18:14:48.368623 5099 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="826eb8ff813d051fd23ef43f1b4137c7466d05d85d0ebfaed6471fcc40f698a7" exitCode=255 Jan 21 18:14:48 crc kubenswrapper[5099]: I0121 18:14:48.368726 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"826eb8ff813d051fd23ef43f1b4137c7466d05d85d0ebfaed6471fcc40f698a7"} Jan 21 18:14:48 crc kubenswrapper[5099]: I0121 18:14:48.368814 5099 scope.go:117] "RemoveContainer" containerID="0365d543f1255285ea5494d8794b264c0e22001f25e080150e4520836b258be6" Jan 21 18:14:48 crc kubenswrapper[5099]: I0121 18:14:48.369118 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 18:14:48 crc kubenswrapper[5099]: I0121 18:14:48.370046 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:14:48 crc kubenswrapper[5099]: I0121 18:14:48.370261 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:14:48 crc kubenswrapper[5099]: I0121 18:14:48.370442 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:14:48 crc kubenswrapper[5099]: E0121 18:14:48.372011 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 18:14:48 crc kubenswrapper[5099]: I0121 18:14:48.372623 5099 scope.go:117] "RemoveContainer" containerID="826eb8ff813d051fd23ef43f1b4137c7466d05d85d0ebfaed6471fcc40f698a7" Jan 21 18:14:48 crc kubenswrapper[5099]: E0121 18:14:48.373098 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 21 18:14:48 crc kubenswrapper[5099]: E0121 18:14:48.380692 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188cd1a7936fba3c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cd1a7936fba3c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod 
kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:31.313078844 +0000 UTC m=+28.727041315,LastTimestamp:2026-01-21 18:14:48.373022252 +0000 UTC m=+45.786984743,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:48 crc kubenswrapper[5099]: I0121 18:14:48.584209 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 18:14:48 crc kubenswrapper[5099]: E0121 18:14:48.692705 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 21 18:14:49 crc kubenswrapper[5099]: I0121 18:14:49.373585 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 21 18:14:49 crc kubenswrapper[5099]: I0121 18:14:49.585122 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 18:14:50 crc kubenswrapper[5099]: I0121 18:14:50.589166 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 18:14:50 crc kubenswrapper[5099]: I0121 18:14:50.971392 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 18:14:50 crc kubenswrapper[5099]: I0121 18:14:50.973251 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:14:50 crc kubenswrapper[5099]: I0121 18:14:50.973308 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:14:50 crc kubenswrapper[5099]: I0121 18:14:50.973329 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:14:50 crc kubenswrapper[5099]: I0121 18:14:50.973366 5099 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 21 18:14:50 crc kubenswrapper[5099]: E0121 18:14:50.974029 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 21 18:14:50 crc kubenswrapper[5099]: E0121 18:14:50.990785 5099 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 21 18:14:51 crc kubenswrapper[5099]: I0121 18:14:51.583835 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io 
"crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 18:14:52 crc kubenswrapper[5099]: I0121 18:14:52.251714 5099 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 18:14:52 crc kubenswrapper[5099]: I0121 18:14:52.252129 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 18:14:52 crc kubenswrapper[5099]: I0121 18:14:52.253181 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:14:52 crc kubenswrapper[5099]: I0121 18:14:52.253352 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:14:52 crc kubenswrapper[5099]: I0121 18:14:52.253381 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:14:52 crc kubenswrapper[5099]: E0121 18:14:52.254041 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 18:14:52 crc kubenswrapper[5099]: I0121 18:14:52.254463 5099 scope.go:117] "RemoveContainer" containerID="826eb8ff813d051fd23ef43f1b4137c7466d05d85d0ebfaed6471fcc40f698a7" Jan 21 18:14:52 crc kubenswrapper[5099]: E0121 18:14:52.255010 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 21 18:14:52 crc kubenswrapper[5099]: E0121 18:14:52.260954 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188cd1a7936fba3c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cd1a7936fba3c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:31.313078844 +0000 UTC m=+28.727041315,LastTimestamp:2026-01-21 18:14:52.254883249 +0000 UTC m=+49.668845740,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:52 crc kubenswrapper[5099]: I0121 18:14:52.582641 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 18:14:53 crc kubenswrapper[5099]: I0121 18:14:53.586090 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User 
"system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 18:14:53 crc kubenswrapper[5099]: I0121 18:14:53.810691 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 18:14:53 crc kubenswrapper[5099]: I0121 18:14:53.811291 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 18:14:53 crc kubenswrapper[5099]: I0121 18:14:53.814862 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:14:53 crc kubenswrapper[5099]: I0121 18:14:53.814931 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:14:53 crc kubenswrapper[5099]: I0121 18:14:53.814951 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:14:53 crc kubenswrapper[5099]: E0121 18:14:53.815457 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 18:14:53 crc kubenswrapper[5099]: E0121 18:14:53.996283 5099 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 21 18:14:54 crc kubenswrapper[5099]: I0121 18:14:54.584055 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 18:14:55 crc kubenswrapper[5099]: E0121 18:14:55.033291 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 21 18:14:55 crc kubenswrapper[5099]: I0121 18:14:55.583858 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 18:14:56 crc kubenswrapper[5099]: I0121 18:14:56.583018 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 18:14:57 crc kubenswrapper[5099]: I0121 18:14:57.359514 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 18:14:57 crc kubenswrapper[5099]: I0121 18:14:57.359862 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 18:14:57 crc kubenswrapper[5099]: I0121 18:14:57.361071 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:14:57 crc kubenswrapper[5099]: I0121 18:14:57.361140 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:14:57 crc kubenswrapper[5099]: I0121 18:14:57.361160 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Jan 21 18:14:57 crc kubenswrapper[5099]: E0121 18:14:57.361825 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 18:14:57 crc kubenswrapper[5099]: I0121 18:14:57.362276 5099 scope.go:117] "RemoveContainer" containerID="826eb8ff813d051fd23ef43f1b4137c7466d05d85d0ebfaed6471fcc40f698a7" Jan 21 18:14:57 crc kubenswrapper[5099]: E0121 18:14:57.362616 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 21 18:14:57 crc kubenswrapper[5099]: E0121 18:14:57.370579 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188cd1a7936fba3c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cd1a7936fba3c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:31.313078844 +0000 UTC m=+28.727041315,LastTimestamp:2026-01-21 18:14:57.362564632 +0000 UTC m=+54.776527133,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:14:57 crc kubenswrapper[5099]: I0121 18:14:57.583582 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 18:14:57 crc kubenswrapper[5099]: E0121 18:14:57.980650 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 21 18:14:57 crc kubenswrapper[5099]: I0121 18:14:57.991149 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 18:14:57 crc kubenswrapper[5099]: I0121 18:14:57.992346 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:14:57 crc kubenswrapper[5099]: I0121 18:14:57.992396 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:14:57 crc kubenswrapper[5099]: I0121 18:14:57.992406 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:14:57 crc kubenswrapper[5099]: I0121 18:14:57.992434 5099 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 21 18:14:58 crc kubenswrapper[5099]: E0121 
18:14:58.003427 5099 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 21 18:14:58 crc kubenswrapper[5099]: I0121 18:14:58.583878 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 18:14:59 crc kubenswrapper[5099]: I0121 18:14:59.584011 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 18:15:00 crc kubenswrapper[5099]: I0121 18:15:00.584785 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 18:15:01 crc kubenswrapper[5099]: I0121 18:15:01.583854 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 18:15:02 crc kubenswrapper[5099]: I0121 18:15:02.586999 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 18:15:03 crc kubenswrapper[5099]: I0121 18:15:03.582058 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 18:15:03 crc kubenswrapper[5099]: E0121 18:15:03.996579 5099 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 21 18:15:04 crc kubenswrapper[5099]: I0121 18:15:04.583282 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 18:15:04 crc kubenswrapper[5099]: E0121 18:15:04.987816 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 21 18:15:05 crc kubenswrapper[5099]: I0121 18:15:05.004085 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 18:15:05 crc kubenswrapper[5099]: I0121 18:15:05.004878 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:05 crc kubenswrapper[5099]: I0121 18:15:05.004908 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:05 crc kubenswrapper[5099]: I0121 18:15:05.004918 5099 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 21 18:15:05 crc kubenswrapper[5099]: I0121 18:15:05.004940 5099 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 21 18:15:05 crc kubenswrapper[5099]: E0121 18:15:05.019629 5099 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 21 18:15:05 crc kubenswrapper[5099]: I0121 18:15:05.585659 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 18:15:06 crc kubenswrapper[5099]: I0121 18:15:06.582954 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 18:15:07 crc kubenswrapper[5099]: I0121 18:15:07.583580 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 18:15:08 crc kubenswrapper[5099]: I0121 18:15:08.584053 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 18:15:09 crc kubenswrapper[5099]: I0121 18:15:09.579303 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 18:15:09 crc kubenswrapper[5099]: I0121 18:15:09.913046 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 18:15:09 crc kubenswrapper[5099]: I0121 18:15:09.914091 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:09 crc kubenswrapper[5099]: I0121 18:15:09.914182 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:09 crc kubenswrapper[5099]: I0121 18:15:09.914198 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:09 crc kubenswrapper[5099]: E0121 18:15:09.914619 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 18:15:09 crc kubenswrapper[5099]: I0121 18:15:09.914934 5099 scope.go:117] "RemoveContainer" containerID="826eb8ff813d051fd23ef43f1b4137c7466d05d85d0ebfaed6471fcc40f698a7" Jan 21 18:15:09 crc kubenswrapper[5099]: E0121 18:15:09.922472 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188cd1a2d947309f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cd1a2d947309f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:14:11.009990815 +0000 UTC m=+8.423953276,LastTimestamp:2026-01-21 18:15:09.916164548 +0000 UTC m=+67.330127009,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 18:15:10 crc kubenswrapper[5099]: I0121 18:15:10.295149 5099 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-wlvzv" Jan 21 18:15:10 crc kubenswrapper[5099]: I0121 18:15:10.303124 5099 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-wlvzv" Jan 21 18:15:10 crc kubenswrapper[5099]: I0121 18:15:10.327165 5099 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 21 18:15:10 crc kubenswrapper[5099]: I0121 18:15:10.410806 5099 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 21 18:15:10 crc kubenswrapper[5099]: I0121 18:15:10.432459 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 21 18:15:10 crc kubenswrapper[5099]: I0121 18:15:10.434194 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"1dec0ab2e83799703e0d0ba5f91da97aaa7f611990710cb8ee7aa49e64a46994"} Jan 21 18:15:10 crc kubenswrapper[5099]: I0121 18:15:10.434398 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 18:15:10 crc kubenswrapper[5099]: I0121 18:15:10.435243 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:10 crc kubenswrapper[5099]: I0121 18:15:10.435275 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:10 crc kubenswrapper[5099]: I0121 18:15:10.435284 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:10 crc kubenswrapper[5099]: E0121 18:15:10.435647 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 18:15:11 crc kubenswrapper[5099]: I0121 18:15:11.305232 5099 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2026-02-20 18:10:10 +0000 UTC" deadline="2026-02-17 04:43:03.761322973 +0000 UTC" Jan 21 18:15:11 crc kubenswrapper[5099]: I0121 18:15:11.305311 5099 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="634h27m52.456016273s" Jan 21 18:15:11 crc kubenswrapper[5099]: I0121 18:15:11.439408 5099 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 21 18:15:11 crc kubenswrapper[5099]: I0121 18:15:11.440410 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 21 18:15:11 crc kubenswrapper[5099]: I0121 18:15:11.442225 5099 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="1dec0ab2e83799703e0d0ba5f91da97aaa7f611990710cb8ee7aa49e64a46994" exitCode=255 Jan 21 18:15:11 crc kubenswrapper[5099]: I0121 18:15:11.442274 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"1dec0ab2e83799703e0d0ba5f91da97aaa7f611990710cb8ee7aa49e64a46994"} Jan 21 18:15:11 crc kubenswrapper[5099]: I0121 18:15:11.442316 5099 scope.go:117] "RemoveContainer" containerID="826eb8ff813d051fd23ef43f1b4137c7466d05d85d0ebfaed6471fcc40f698a7" Jan 21 18:15:11 crc kubenswrapper[5099]: I0121 18:15:11.442782 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 18:15:11 crc kubenswrapper[5099]: I0121 18:15:11.443776 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:11 crc kubenswrapper[5099]: I0121 18:15:11.443866 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:11 crc kubenswrapper[5099]: I0121 18:15:11.443888 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:11 crc kubenswrapper[5099]: E0121 18:15:11.444860 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 18:15:11 crc kubenswrapper[5099]: I0121 18:15:11.445200 5099 scope.go:117] "RemoveContainer" containerID="1dec0ab2e83799703e0d0ba5f91da97aaa7f611990710cb8ee7aa49e64a46994" Jan 21 18:15:11 crc kubenswrapper[5099]: E0121 18:15:11.445639 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 21 18:15:12 crc kubenswrapper[5099]: I0121 18:15:12.019761 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 18:15:12 crc kubenswrapper[5099]: I0121 18:15:12.020760 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:12 crc kubenswrapper[5099]: I0121 18:15:12.020807 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:12 crc kubenswrapper[5099]: I0121 18:15:12.020818 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:12 crc kubenswrapper[5099]: I0121 18:15:12.020933 5099 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 21 18:15:12 crc kubenswrapper[5099]: I0121 18:15:12.029313 5099 
kubelet_node_status.go:127] "Node was previously registered" node="crc" Jan 21 18:15:12 crc kubenswrapper[5099]: I0121 18:15:12.029820 5099 kubelet_node_status.go:81] "Successfully registered node" node="crc" Jan 21 18:15:12 crc kubenswrapper[5099]: E0121 18:15:12.029925 5099 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Jan 21 18:15:12 crc kubenswrapper[5099]: I0121 18:15:12.033210 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:12 crc kubenswrapper[5099]: I0121 18:15:12.033244 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:12 crc kubenswrapper[5099]: I0121 18:15:12.033254 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:12 crc kubenswrapper[5099]: I0121 18:15:12.033267 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:12 crc kubenswrapper[5099]: I0121 18:15:12.033277 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:12Z","lastTransitionTime":"2026-01-21T18:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:12 crc kubenswrapper[5099]: E0121 18:15:12.050046 5099 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T18:15:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T18:15:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T18:15:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T18:15:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"78a4762e-39b8-4942-bf29-6a84c0f689b6\\\",\\\"systemUUID\\\":\\\"75e1b546-23f6-45fa-956a-1002c3d2f9b5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:12 crc kubenswrapper[5099]: I0121 18:15:12.059826 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:12 crc kubenswrapper[5099]: I0121 18:15:12.059925 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:12 crc kubenswrapper[5099]: I0121 18:15:12.059937 5099 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:12 crc kubenswrapper[5099]: I0121 18:15:12.059980 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:12 crc kubenswrapper[5099]: I0121 18:15:12.059993 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:12Z","lastTransitionTime":"2026-01-21T18:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:12 crc kubenswrapper[5099]: E0121 18:15:12.073564 5099 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T18:15:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T18:15:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T18:15:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T18:15:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"78a4762e-39b8-4942-bf29-6a84c0f689b6\\\",\\\"systemUUID\\\":\\\"75e1b546-23f6-45fa-956a-1002c3d2f9b5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:12 crc kubenswrapper[5099]: I0121 18:15:12.081890 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:12 crc kubenswrapper[5099]: I0121 18:15:12.081945 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:12 crc kubenswrapper[5099]: I0121 18:15:12.081958 5099 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:12 crc kubenswrapper[5099]: I0121 18:15:12.081977 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:12 crc kubenswrapper[5099]: I0121 18:15:12.081987 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:12Z","lastTransitionTime":"2026-01-21T18:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:12 crc kubenswrapper[5099]: E0121 18:15:12.093231 5099 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T18:15:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T18:15:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T18:15:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T18:15:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"78a4762e-39b8-4942-bf29-6a84c0f689b6\\\",\\\"systemUUID\\\":\\\"75e1b546-23f6-45fa-956a-1002c3d2f9b5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:12 crc kubenswrapper[5099]: I0121 18:15:12.100418 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:12 crc kubenswrapper[5099]: I0121 18:15:12.100724 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:12 crc kubenswrapper[5099]: I0121 18:15:12.100831 5099 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:12 crc kubenswrapper[5099]: I0121 18:15:12.100921 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:12 crc kubenswrapper[5099]: I0121 18:15:12.100995 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:12Z","lastTransitionTime":"2026-01-21T18:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:12 crc kubenswrapper[5099]: E0121 18:15:12.112149 5099 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T18:15:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T18:15:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T18:15:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T18:15:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"78a4762e-39b8-4942-bf29-6a84c0f689b6\\\",\\\"systemUUID\\\":\\\"75e1b546-23f6-45fa-956a-1002c3d2f9b5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:12 crc kubenswrapper[5099]: E0121 18:15:12.112314 5099 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 21 18:15:12 crc kubenswrapper[5099]: E0121 18:15:12.112350 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:12 crc kubenswrapper[5099]: E0121 18:15:12.213097 5099 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:12 crc kubenswrapper[5099]: I0121 18:15:12.251455 5099 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 18:15:12 crc kubenswrapper[5099]: E0121 18:15:12.313772 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:12 crc kubenswrapper[5099]: E0121 18:15:12.414603 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:12 crc kubenswrapper[5099]: I0121 18:15:12.446802 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 21 18:15:12 crc kubenswrapper[5099]: I0121 18:15:12.449219 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 18:15:12 crc kubenswrapper[5099]: I0121 18:15:12.449956 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:12 crc kubenswrapper[5099]: I0121 18:15:12.450027 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:12 crc kubenswrapper[5099]: I0121 18:15:12.450051 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:12 crc kubenswrapper[5099]: E0121 18:15:12.451126 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 18:15:12 crc kubenswrapper[5099]: I0121 18:15:12.451659 5099 scope.go:117] "RemoveContainer" containerID="1dec0ab2e83799703e0d0ba5f91da97aaa7f611990710cb8ee7aa49e64a46994" Jan 21 18:15:12 crc kubenswrapper[5099]: E0121 18:15:12.452019 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 21 18:15:12 crc kubenswrapper[5099]: E0121 18:15:12.515549 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:12 crc kubenswrapper[5099]: E0121 18:15:12.616717 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:12 crc kubenswrapper[5099]: E0121 18:15:12.717819 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:12 crc kubenswrapper[5099]: E0121 18:15:12.818773 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:12 crc kubenswrapper[5099]: E0121 18:15:12.919230 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:13 crc kubenswrapper[5099]: E0121 18:15:13.019706 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:13 crc kubenswrapper[5099]: E0121 18:15:13.120368 5099 kubelet_node_status.go:515] "Error getting the current node 
from lister" err="node \"crc\" not found" Jan 21 18:15:13 crc kubenswrapper[5099]: E0121 18:15:13.221370 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:13 crc kubenswrapper[5099]: E0121 18:15:13.322601 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:13 crc kubenswrapper[5099]: E0121 18:15:13.422944 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:13 crc kubenswrapper[5099]: E0121 18:15:13.523428 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:13 crc kubenswrapper[5099]: E0121 18:15:13.623787 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:13 crc kubenswrapper[5099]: E0121 18:15:13.724807 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:13 crc kubenswrapper[5099]: E0121 18:15:13.825679 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:13 crc kubenswrapper[5099]: E0121 18:15:13.925898 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:13 crc kubenswrapper[5099]: E0121 18:15:13.997752 5099 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 21 18:15:14 crc kubenswrapper[5099]: E0121 18:15:14.026284 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:14 crc kubenswrapper[5099]: E0121 18:15:14.126694 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:14 crc kubenswrapper[5099]: E0121 18:15:14.227786 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:14 crc kubenswrapper[5099]: E0121 18:15:14.328882 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:14 crc kubenswrapper[5099]: E0121 18:15:14.429187 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:14 crc kubenswrapper[5099]: E0121 18:15:14.529909 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:14 crc kubenswrapper[5099]: E0121 18:15:14.630220 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:14 crc kubenswrapper[5099]: E0121 18:15:14.731463 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:14 crc kubenswrapper[5099]: E0121 18:15:14.831993 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:14 crc kubenswrapper[5099]: E0121 18:15:14.932526 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:15 crc kubenswrapper[5099]: E0121 18:15:15.032697 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:15 crc kubenswrapper[5099]: E0121 18:15:15.133542 5099 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:15 crc kubenswrapper[5099]: E0121 18:15:15.234272 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:15 crc kubenswrapper[5099]: E0121 18:15:15.335210 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:15 crc kubenswrapper[5099]: E0121 18:15:15.435329 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:15 crc kubenswrapper[5099]: E0121 18:15:15.536231 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:15 crc kubenswrapper[5099]: E0121 18:15:15.637086 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:15 crc kubenswrapper[5099]: E0121 18:15:15.737619 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:15 crc kubenswrapper[5099]: E0121 18:15:15.838071 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:15 crc kubenswrapper[5099]: I0121 18:15:15.913486 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 18:15:15 crc kubenswrapper[5099]: I0121 18:15:15.914388 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:15 crc kubenswrapper[5099]: I0121 18:15:15.914470 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:15 crc kubenswrapper[5099]: I0121 18:15:15.914486 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:15 crc kubenswrapper[5099]: E0121 18:15:15.915035 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 18:15:15 crc kubenswrapper[5099]: E0121 18:15:15.938222 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:16 crc kubenswrapper[5099]: E0121 18:15:16.038918 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:16 crc kubenswrapper[5099]: E0121 18:15:16.139467 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:16 crc kubenswrapper[5099]: E0121 18:15:16.240775 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:16 crc kubenswrapper[5099]: E0121 18:15:16.341265 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:16 crc kubenswrapper[5099]: E0121 18:15:16.442127 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:16 crc kubenswrapper[5099]: E0121 18:15:16.543270 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:16 crc kubenswrapper[5099]: E0121 18:15:16.643639 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:16 
crc kubenswrapper[5099]: E0121 18:15:16.744700 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:16 crc kubenswrapper[5099]: E0121 18:15:16.845755 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:16 crc kubenswrapper[5099]: E0121 18:15:16.946360 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:17 crc kubenswrapper[5099]: E0121 18:15:17.047477 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:17 crc kubenswrapper[5099]: E0121 18:15:17.148439 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:17 crc kubenswrapper[5099]: E0121 18:15:17.249638 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:17 crc kubenswrapper[5099]: E0121 18:15:17.350257 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:17 crc kubenswrapper[5099]: E0121 18:15:17.451413 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:17 crc kubenswrapper[5099]: E0121 18:15:17.551836 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:17 crc kubenswrapper[5099]: E0121 18:15:17.653004 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:17 crc kubenswrapper[5099]: E0121 18:15:17.754205 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:17 crc kubenswrapper[5099]: E0121 18:15:17.855185 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:17 crc kubenswrapper[5099]: E0121 18:15:17.956290 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:18 crc kubenswrapper[5099]: E0121 18:15:18.057398 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:18 crc kubenswrapper[5099]: E0121 18:15:18.157849 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:18 crc kubenswrapper[5099]: E0121 18:15:18.257996 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:18 crc kubenswrapper[5099]: E0121 18:15:18.358457 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:18 crc kubenswrapper[5099]: E0121 18:15:18.458855 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:18 crc kubenswrapper[5099]: E0121 18:15:18.559416 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:18 crc kubenswrapper[5099]: E0121 18:15:18.660287 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:18 crc kubenswrapper[5099]: E0121 18:15:18.760471 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" 
Jan 21 18:15:20 crc kubenswrapper[5099]: E0121 18:15:20.371711 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 18:15:20 crc kubenswrapper[5099]: I0121 18:15:20.435968 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 18:15:20 crc kubenswrapper[5099]: I0121 18:15:20.436500 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 18:15:20 crc kubenswrapper[5099]: I0121 18:15:20.437446 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 18:15:20 crc kubenswrapper[5099]: I0121 18:15:20.437482 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 18:15:20 crc kubenswrapper[5099]: I0121 18:15:20.437492 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
node="crc" event="NodeHasSufficientPID" Jan 21 18:15:20 crc kubenswrapper[5099]: E0121 18:15:20.437890 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 18:15:20 crc kubenswrapper[5099]: I0121 18:15:20.438138 5099 scope.go:117] "RemoveContainer" containerID="1dec0ab2e83799703e0d0ba5f91da97aaa7f611990710cb8ee7aa49e64a46994" Jan 21 18:15:20 crc kubenswrapper[5099]: E0121 18:15:20.438333 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 21 18:15:20 crc kubenswrapper[5099]: E0121 18:15:20.472223 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:20 crc kubenswrapper[5099]: E0121 18:15:20.572344 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:20 crc kubenswrapper[5099]: I0121 18:15:20.595513 5099 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Jan 21 18:15:20 crc kubenswrapper[5099]: E0121 18:15:20.672914 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:20 crc kubenswrapper[5099]: E0121 18:15:20.773979 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:20 crc kubenswrapper[5099]: E0121 18:15:20.874827 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 18:15:20 crc kubenswrapper[5099]: I0121 18:15:20.880580 5099 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Jan 21 18:15:20 crc kubenswrapper[5099]: I0121 18:15:20.889055 5099 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 18:15:20 crc kubenswrapper[5099]: I0121 18:15:20.899805 5099 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc" Jan 21 18:15:20 crc kubenswrapper[5099]: I0121 18:15:20.977389 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:20 crc kubenswrapper[5099]: I0121 18:15:20.977672 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:20 crc kubenswrapper[5099]: I0121 18:15:20.977801 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:20 crc kubenswrapper[5099]: I0121 18:15:20.977908 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:20 crc kubenswrapper[5099]: I0121 18:15:20.978004 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:20Z","lastTransitionTime":"2026-01-21T18:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.003644 5099 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc"
[... the NodeHasSufficientMemory / NodeHasNoDiskPressure / NodeHasSufficientPID / NodeNotReady / "Node became not ready" block above recurs with the same KubeletNotReady condition at 18:15:21.080270-21.080399; repeat elided ...]
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.101158 5099 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
[... the same block recurs at 18:15:21.182426-21.182534; repeat elided ...]
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.200855 5099 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
[... the same block recurs at 18:15:21.285210-21.285301, 21.387468-21.387597, 21.490604-21.490876, and 21.593296-21.593402; repeats elided ...]
Has your network provider started?"} Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.655989 5099 apiserver.go:52] "Watching apiserver" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.673654 5099 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.674627 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-svjkb","openshift-kube-apiserver/kube-apiserver-crc","openshift-network-operator/iptables-alerter-5jnd7","openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-nxrc9","openshift-etcd/etcd-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-multus/network-metrics-daemon-tsdhb","openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6","openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5","openshift-dns/node-resolver-s88dj","openshift-image-registry/node-ca-2q8ng","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/machine-config-daemon-hsl47","openshift-multus/multus-6pvpm","openshift-multus/multus-additional-cni-plugins-bb5lc","openshift-network-diagnostics/network-check-target-fhkjl","openshift-network-node-identity/network-node-identity-dgvkt"] Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.676052 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.677157 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 18:15:21 crc kubenswrapper[5099]: E0121 18:15:21.677346 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.678016 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 18:15:21 crc kubenswrapper[5099]: E0121 18:15:21.678357 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.678910 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.679933 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.680222 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.681570 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.681712 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.681893 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.681967 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.684133 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.684547 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 18:15:21 crc kubenswrapper[5099]: E0121 18:15:21.684637 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.684874 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.686179 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.686408 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.695815 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.695871 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.695887 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.695908 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.695922 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:21Z","lastTransitionTime":"2026-01-21T18:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.703852 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.706827 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7"
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.706896 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.706933 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.706998 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.707078 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7"
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.707150 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.707204 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.707232 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7"
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.707258 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.707290 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b19b831f-eaf0-4c77-859b-84eb9a5f233c-rootfs\") pod \"machine-config-daemon-hsl47\" (UID: \"b19b831f-eaf0-4c77-859b-84eb9a5f233c\") " pod="openshift-machine-config-operator/machine-config-daemon-hsl47"
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.707324 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.707351 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Jan 21 18:15:21 crc kubenswrapper[5099]: E0121 18:15:21.707970 5099 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 21 18:15:21 crc kubenswrapper[5099]: E0121 18:15:21.708066 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-21 18:15:22.208042511 +0000 UTC m=+79.622004972 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
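
The secret.go/nestedpendingoperations pair above shows the volume manager's retry gate: "not registered" means the pod's Secret or ConfigMap is not yet in the kubelet's informer cache (the per-namespace reflectors are still being populated, as the "Caches populated" lines show), and after a failed MountVolume.SetUp no retry is permitted for that volume until now plus durationBeforeRetry, 500ms here and growing exponentially while the failure persists (the 2m2s cap below is an assumption, not stated in this log). A self-contained sketch of such a gate, not the kubelet's actual nestedpendingoperations package:

    // volume_retry_sketch.go - a per-volume exponential retry gate in the
    // spirit of "No retries permitted until ... (durationBeforeRetry 500ms)".
    package main

    import (
    	"fmt"
    	"time"
    )

    type retryGate struct {
    	delay time.Duration // durationBeforeRetry for the next attempt
    	next  time.Time     // earliest moment a retry is permitted
    }

    // fail records a failed attempt at time now, doubling the delay up to max.
    func (g *retryGate) fail(now time.Time, initial, max time.Duration) {
    	if g.delay == 0 {
    		g.delay = initial
    	} else if g.delay *= 2; g.delay > max {
    		g.delay = max
    	}
    	g.next = now.Add(g.delay)
    }

    func main() {
    	// Keyed by unique volume name, like the operations in the log.
    	gates := map[string]*retryGate{}
    	vol := "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert"
    	gates[vol] = &retryGate{}
    	now := time.Now()
    	for attempt := 1; attempt <= 4; attempt++ {
    		g := gates[vol]
    		g.fail(now, 500*time.Millisecond, 2*time.Minute+2*time.Second)
    		fmt.Printf("attempt %d failed: no retries permitted until %s (durationBeforeRetry %v)\n",
    			attempt, g.next.Format(time.RFC3339), g.delay)
    		now = g.next
    	}
    }
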
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.708128 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.708183 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b19b831f-eaf0-4c77-859b-84eb9a5f233c-proxy-tls\") pod \"machine-config-daemon-hsl47\" (UID: \"b19b831f-eaf0-4c77-859b-84eb9a5f233c\") " pod="openshift-machine-config-operator/machine-config-daemon-hsl47" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.708207 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b19b831f-eaf0-4c77-859b-84eb9a5f233c-mcd-auth-proxy-config\") pod \"machine-config-daemon-hsl47\" (UID: \"b19b831f-eaf0-4c77-859b-84eb9a5f233c\") " pod="openshift-machine-config-operator/machine-config-daemon-hsl47" Jan 21 18:15:21 crc kubenswrapper[5099]: E0121 18:15:21.708355 5099 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 18:15:21 crc kubenswrapper[5099]: E0121 18:15:21.708413 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-21 18:15:22.208401131 +0000 UTC m=+79.622363592 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.708585 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.708672 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gknfh\" (UniqueName: \"kubernetes.io/projected/b19b831f-eaf0-4c77-859b-84eb9a5f233c-kube-api-access-gknfh\") pod \"machine-config-daemon-hsl47\" (UID: \"b19b831f-eaf0-4c77-859b-84eb9a5f233c\") " pod="openshift-machine-config-operator/machine-config-daemon-hsl47" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.708696 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.710024 5099 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.711880 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.714647 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.718670 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.720575 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7"
Jan 21 18:15:21 crc kubenswrapper[5099]: E0121 18:15:21.723822 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 21 18:15:21 crc kubenswrapper[5099]: E0121 18:15:21.723863 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 21 18:15:21 crc kubenswrapper[5099]: E0121 18:15:21.723877 5099 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 18:15:21 crc kubenswrapper[5099]: E0121 18:15:21.723975 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-21 18:15:22.223951023 +0000 UTC m=+79.637913484 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.725351 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.728425 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.728606 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tsdhb" Jan 21 18:15:21 crc kubenswrapper[5099]: E0121 18:15:21.729099 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tsdhb" podUID="0d26f0ad-829f-4f64-82b5-1292bd2316f0" Jan 21 18:15:21 crc kubenswrapper[5099]: E0121 18:15:21.732275 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 18:15:21 crc kubenswrapper[5099]: E0121 18:15:21.732316 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 18:15:21 crc kubenswrapper[5099]: E0121 18:15:21.732332 5099 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 18:15:21 crc kubenswrapper[5099]: E0121 18:15:21.732424 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-21 18:15:22.232397082 +0000 UTC m=+79.646359563 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.733975 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.734392 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7"
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.734544 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.737337 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.741233 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-hsl47"
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.743968 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.744144 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.744312 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.744340 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.744349 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.744631 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.744910 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.748103 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.748181 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.748191 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.748596 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-2q8ng" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.748865 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.749034 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.749082 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-s88dj" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.749284 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.749338 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.751164 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.751468 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.752422 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.752495 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.752533 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.752573 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.752588 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.759903 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.761607 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-6pvpm" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.763633 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-bb5lc" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.763812 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-nxrc9" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.768390 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.770942 5099 scope.go:117] "RemoveContainer" containerID="1dec0ab2e83799703e0d0ba5f91da97aaa7f611990710cb8ee7aa49e64a46994" Jan 21 18:15:21 crc kubenswrapper[5099]: E0121 18:15:21.771347 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.771973 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.772986 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.773149 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.773751 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.774214 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.774465 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.774834 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.777867 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.779474 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.784535 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.789864 5099 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.798393 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.798445 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.798456 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.798479 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.798492 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:21Z","lastTransitionTime":"2026-01-21T18:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.798989 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.809172 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.809215 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.809239 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.809260 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.809277 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.809298 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.809315 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " 
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.809365 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.809381 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.809396 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.809416 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.809434 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.809450 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.809469 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.809488 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.809504 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.809521 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") 
" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.809536 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.809554 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.809573 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.809589 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.809605 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.809624 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.809642 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.809685 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.809707 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.809724 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 21 
18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.809753 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.809770 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.809787 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.809805 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.809823 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.809841 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.809856 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.809872 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.809891 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.809905 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 21 
18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.809921 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.809938 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.809956 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.809974 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.809991 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.810009 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.810032 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.810048 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.810066 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.810091 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" 
(UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.810115 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.810135 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.810151 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.810167 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.810190 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.810210 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.810226 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.810241 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.810257 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.810300 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbc2l\" (UniqueName: 
\"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.810320 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.810337 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.810359 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.810377 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.810396 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.810412 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.810427 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.810448 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.810466 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.810481 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.810500 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.810530 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.810563 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.810581 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.810599 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.810615 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.810633 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.810657 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.810684 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.811037 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: 
"kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.811208 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.811650 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.811694 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "kube-api-access-mjwtd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.811812 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.811888 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" (OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.812048 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.812063 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.812292 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.812494 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.812570 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.812917 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.812978 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: "kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "kube-api-access-9vsz9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.813087 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "kube-api-access-6g4lr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.813185 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.813255 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.813485 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "kube-api-access-ddlk9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.813551 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.813723 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.813747 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.813785 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.813952 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.814224 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.814264 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.814613 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.814704 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "kube-api-access-sbc2l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.814765 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.814951 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.814966 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.815217 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" (OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "kube-api-access-wj4qr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.815540 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.815606 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.815689 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.815907 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "kube-api-access-ks6v2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.815953 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.816004 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.816379 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). 
InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.816419 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.816392 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.816626 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.816889 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.816972 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.817068 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.817182 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.816574 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" (OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.816934 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.816968 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.817645 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.817658 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.817706 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.817743 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.817763 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.817781 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.817799 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.818054 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.818167 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" (OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.818179 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.817194 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.817389 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.817517 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.817596 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.818242 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.818530 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.818596 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "kube-api-access-4hb7m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.818677 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.819030 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.819075 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.819088 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "kube-api-access-ws8zz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.819090 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.817084 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.819299 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.819324 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.819627 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.819660 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" (OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.820127 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.820236 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.820564 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.820945 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.820971 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.821008 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.821035 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.821057 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.821079 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.821102 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.821254 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.821262 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.821292 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.821335 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.821344 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.821381 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.821428 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.821536 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.821567 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.821593 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.821624 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Jan 21 18:15:21 crc 
kubenswrapper[5099]: I0121 18:15:21.821650 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.821695 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.821723 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.821803 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.821833 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.821859 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.821883 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.821888 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.821908 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.821931 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.821957 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.821981 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.821993 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.822005 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.822058 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.822066 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.822255 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.822284 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.822664 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod \"e093be35-bb62-4843-b2e8-094545761610\" (UID: \"e093be35-bb62-4843-b2e8-094545761610\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.822802 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.822832 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.822872 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.823111 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.823177 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "kube-api-access-rzt4w". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.823234 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.823267 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.823359 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.823445 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.823572 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "kube-api-access-z5rsr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.823760 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.823778 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.823869 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.823879 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.824022 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.824094 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.824052 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.824103 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.824197 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.823892 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.824824 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.824857 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.824884 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.824912 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.824932 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.824930 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.824953 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.824975 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.824963 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.824995 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.825133 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.825165 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.825203 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.825228 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.825249 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.825279 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.825306 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.825336 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.825360 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.825365 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.825388 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.825424 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.825456 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.825483 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.825513 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.825538 5099 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.825561 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.825585 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.825607 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.825633 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.825659 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.825687 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.825719 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.825770 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.825796 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.825859 5099 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.825886 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.825973 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.826035 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.826064 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.826089 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.826117 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: \"af41de71-79cf-4590-bbe9-9e8b848862cb\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.826144 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.826167 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.826191 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.826215 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.826242 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.826271 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.826295 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.826327 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.826383 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb04eac-bd35-447c-88ec-2f7b7296cb0e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://b5effc70b8488095428d6d5459b5766bcc3d5f049f11532a3e33c309d5895ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://bccc1caa9affdcec7c83cf922fb2dcf8634fb3bfe34f4d0efc602ef68e8ee7b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://820b29ee082671c0eb57e0818417c685033d77f07d6f3616eaa1d2fd22cfa628\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://409970ffae80bc9259bb84be447a16b4506850a5ee1651f83b231fbb4e423cd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://ab3d8ac4f69b1d44b4ee29b2914f9f30a7e966194ce3efcfa2079bf66e522fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0b2410052c2fa6360fd3b85be6092bfad8e857f
bf2ca993187238c13769f8832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b2410052c2fa6360fd3b85be6092bfad8e857fbf2ca993187238c13769f8832\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T18:14:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9604f7270770e683bf5d664f618d34ca92249229df8a831af789d013e699f878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9604f7270770e683bf5d664f618d34ca92249229df8a831af789d013e699f878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T18:14:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T18:14:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6fa7c8ea86015110480c177f1d09a0bacd7ac31de42f05a4e2a6691ef15f510a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fa7c8ea86015110480c177f1d09a0bacd7ac31de42f05a4e2a6691ef15f510a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T18:14:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T18:14:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:14:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.825388 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.825664 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.825672 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.825688 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.825439 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.826622 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.826683 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "kube-api-access-6rmnv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.826677 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" (OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.826898 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.826994 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.827191 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.827268 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.827335 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "whereabouts-flatfile-configmap". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.827799 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.828096 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.828103 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.828340 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.828815 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.828932 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.828831 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.829059 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" (OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "kube-api-access-w94wk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.829786 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.829816 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.829855 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.829859 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.829928 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.829956 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.829990 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.830013 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.830047 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.830503 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" (OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.830022 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.830592 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.830886 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.830905 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.830940 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.830943 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.830982 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.831017 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.831046 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.831078 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.831106 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.831113 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.831121 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.831137 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.831185 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.831231 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.831334 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.831434 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.831680 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.831840 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.831883 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.831913 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.832114 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.832288 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.832477 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.833348 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.832505 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "kube-api-access-6dmhf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.832559 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.833437 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.833477 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.832420 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). InnerVolumeSpecName "kube-api-access-pddnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.833493 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.833508 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.833537 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.833566 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.833591 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.833612 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.833636 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.833667 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.833689 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.833712 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.833770 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod 
\"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.833799 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.833818 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.833840 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.833862 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.833884 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.833904 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.833926 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.832811 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.833066 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.835718 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "kube-api-access-xnxbn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.833114 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.833124 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.833135 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "kube-api-access-ftwb6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.833278 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.833526 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "kube-api-access-wbmqg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.833830 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.833994 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "kube-api-access-94l9h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.834311 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.835842 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.834534 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "kube-api-access-99zj9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.834614 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: E0121 18:15:21.834667 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:15:22.334639649 +0000 UTC m=+79.748602120 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.835888 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.835913 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.835935 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.835954 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.835974 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.835986 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.836001 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.836021 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.836041 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.836062 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.836079 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.836099 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.836100 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.836118 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.836143 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.836150 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.836164 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.836246 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.836308 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.836316 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.836471 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.834679 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "kube-api-access-tknt7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.834700 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.834726 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.835010 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "kube-api-access-8pskd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.835041 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.835167 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.836681 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.836690 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.835206 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.835408 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.835645 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.835669 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "kube-api-access-5lcfw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.835536 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.835527 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.837215 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.837376 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "kube-api-access-26xrl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.837412 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.837535 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b19b831f-eaf0-4c77-859b-84eb9a5f233c-proxy-tls\") pod \"machine-config-daemon-hsl47\" (UID: \"b19b831f-eaf0-4c77-859b-84eb9a5f233c\") " pod="openshift-machine-config-operator/machine-config-daemon-hsl47" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.837585 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.837622 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/57ef6e89-3637-4516-a464-973f45d9ed03-serviceca\") pod \"node-ca-2q8ng\" (UID: \"57ef6e89-3637-4516-a464-973f45d9ed03\") " pod="openshift-image-registry/node-ca-2q8ng" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.837754 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.837770 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.837777 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "kube-api-access-twvbl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.837805 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d7521550-bc40-43eb-bcb0-f563416d810b-env-overrides\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.837810 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.837843 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss74r\" (UniqueName: \"kubernetes.io/projected/d7521550-bc40-43eb-bcb0-f563416d810b-kube-api-access-ss74r\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.837872 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/fedcb6dd-93e2-4530-b748-52a296d7809d-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-bb5lc\" (UID: \"fedcb6dd-93e2-4530-b748-52a296d7809d\") " pod="openshift-multus/multus-additional-cni-plugins-bb5lc" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.837895 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x9c2\" (UniqueName: \"kubernetes.io/projected/20fc4331-f128-4a9a-b77f-85af1cf094cf-kube-api-access-7x9c2\") pod \"node-resolver-s88dj\" (UID: \"20fc4331-f128-4a9a-b77f-85af1cf094cf\") " pod="openshift-dns/node-resolver-s88dj" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.837965 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-run-systemd\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.837992 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d7521550-bc40-43eb-bcb0-f563416d810b-ovnkube-script-lib\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.838022 5099 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75f6x\" (UniqueName: \"kubernetes.io/projected/fedcb6dd-93e2-4530-b748-52a296d7809d-kube-api-access-75f6x\") pod \"multus-additional-cni-plugins-bb5lc\" (UID: \"fedcb6dd-93e2-4530-b748-52a296d7809d\") " pod="openshift-multus/multus-additional-cni-plugins-bb5lc" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.838066 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d9b34413-4767-4d59-b13b-8f882453977a-os-release\") pod \"multus-6pvpm\" (UID: \"d9b34413-4767-4d59-b13b-8f882453977a\") " pod="openshift-multus/multus-6pvpm" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.838110 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/d9b34413-4767-4d59-b13b-8f882453977a-multus-socket-dir-parent\") pod \"multus-6pvpm\" (UID: \"d9b34413-4767-4d59-b13b-8f882453977a\") " pod="openshift-multus/multus-6pvpm" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.838133 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d9b34413-4767-4d59-b13b-8f882453977a-host-run-k8s-cni-cncf-io\") pod \"multus-6pvpm\" (UID: \"d9b34413-4767-4d59-b13b-8f882453977a\") " pod="openshift-multus/multus-6pvpm" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.838160 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d9b34413-4767-4d59-b13b-8f882453977a-host-run-netns\") pod \"multus-6pvpm\" (UID: \"d9b34413-4767-4d59-b13b-8f882453977a\") " pod="openshift-multus/multus-6pvpm" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.838186 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6xzx\" (UniqueName: \"kubernetes.io/projected/57ef6e89-3637-4516-a464-973f45d9ed03-kube-api-access-k6xzx\") pod \"node-ca-2q8ng\" (UID: \"57ef6e89-3637-4516-a464-973f45d9ed03\") " pod="openshift-image-registry/node-ca-2q8ng" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.838203 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-host-kubelet\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.838219 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-host-slash\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.838284 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-host-cni-bin\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: 
I0121 18:15:21.838381 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-host-run-netns\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.838402 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dd3b8a6d-69a8-4079-a747-f379b71bcafe-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-nxrc9\" (UID: \"dd3b8a6d-69a8-4079-a747-f379b71bcafe\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-nxrc9" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.838448 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d9b34413-4767-4d59-b13b-8f882453977a-host-var-lib-kubelet\") pod \"multus-6pvpm\" (UID: \"d9b34413-4767-4d59-b13b-8f882453977a\") " pod="openshift-multus/multus-6pvpm" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.838466 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dd3b8a6d-69a8-4079-a747-f379b71bcafe-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-nxrc9\" (UID: \"dd3b8a6d-69a8-4079-a747-f379b71bcafe\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-nxrc9" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.838494 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-var-lib-openvswitch\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.838514 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d9b34413-4767-4d59-b13b-8f882453977a-host-var-lib-cni-bin\") pod \"multus-6pvpm\" (UID: \"d9b34413-4767-4d59-b13b-8f882453977a\") " pod="openshift-multus/multus-6pvpm" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.838530 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/d9b34413-4767-4d59-b13b-8f882453977a-multus-daemon-config\") pod \"multus-6pvpm\" (UID: \"d9b34413-4767-4d59-b13b-8f882453977a\") " pod="openshift-multus/multus-6pvpm" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.838588 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.838603 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/d9b34413-4767-4d59-b13b-8f882453977a-host-var-lib-cni-multus\") pod \"multus-6pvpm\" (UID: \"d9b34413-4767-4d59-b13b-8f882453977a\") " pod="openshift-multus/multus-6pvpm" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.838631 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b19b831f-eaf0-4c77-859b-84eb9a5f233c-rootfs\") pod \"machine-config-daemon-hsl47\" (UID: \"b19b831f-eaf0-4c77-859b-84eb9a5f233c\") " pod="openshift-machine-config-operator/machine-config-daemon-hsl47" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.838650 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/57ef6e89-3637-4516-a464-973f45d9ed03-host\") pod \"node-ca-2q8ng\" (UID: \"57ef6e89-3637-4516-a464-973f45d9ed03\") " pod="openshift-image-registry/node-ca-2q8ng" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.838667 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/20fc4331-f128-4a9a-b77f-85af1cf094cf-hosts-file\") pod \"node-resolver-s88dj\" (UID: \"20fc4331-f128-4a9a-b77f-85af1cf094cf\") " pod="openshift-dns/node-resolver-s88dj" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.838685 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d26f0ad-829f-4f64-82b5-1292bd2316f0-metrics-certs\") pod \"network-metrics-daemon-tsdhb\" (UID: \"0d26f0ad-829f-4f64-82b5-1292bd2316f0\") " pod="openshift-multus/network-metrics-daemon-tsdhb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.838702 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d7521550-bc40-43eb-bcb0-f563416d810b-ovn-node-metrics-cert\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.838718 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/fedcb6dd-93e2-4530-b748-52a296d7809d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bb5lc\" (UID: \"fedcb6dd-93e2-4530-b748-52a296d7809d\") " pod="openshift-multus/multus-additional-cni-plugins-bb5lc" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.838776 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b19b831f-eaf0-4c77-859b-84eb9a5f233c-mcd-auth-proxy-config\") pod \"machine-config-daemon-hsl47\" (UID: \"b19b831f-eaf0-4c77-859b-84eb9a5f233c\") " pod="openshift-machine-config-operator/machine-config-daemon-hsl47" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.838828 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.838885 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b19b831f-eaf0-4c77-859b-84eb9a5f233c-rootfs\") pod \"machine-config-daemon-hsl47\" (UID: \"b19b831f-eaf0-4c77-859b-84eb9a5f233c\") " pod="openshift-machine-config-operator/machine-config-daemon-hsl47" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.839059 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-node-log\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.839094 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-log-socket\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.839111 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d7521550-bc40-43eb-bcb0-f563416d810b-ovnkube-config\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.839127 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-etc-openvswitch\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.839608 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b19b831f-eaf0-4c77-859b-84eb9a5f233c-mcd-auth-proxy-config\") pod \"machine-config-daemon-hsl47\" (UID: \"b19b831f-eaf0-4c77-859b-84eb9a5f233c\") " pod="openshift-machine-config-operator/machine-config-daemon-hsl47" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.839896 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.840343 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "kube-api-access-xxfcv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.840420 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.840424 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gknfh\" (UniqueName: \"kubernetes.io/projected/b19b831f-eaf0-4c77-859b-84eb9a5f233c-kube-api-access-gknfh\") pod \"machine-config-daemon-hsl47\" (UID: \"b19b831f-eaf0-4c77-859b-84eb9a5f233c\") " pod="openshift-machine-config-operator/machine-config-daemon-hsl47" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.840517 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-systemd-units\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.840440 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.840590 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.840684 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.840891 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.841044 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). 
InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.841066 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fedcb6dd-93e2-4530-b748-52a296d7809d-os-release\") pod \"multus-additional-cni-plugins-bb5lc\" (UID: \"fedcb6dd-93e2-4530-b748-52a296d7809d\") " pod="openshift-multus/multus-additional-cni-plugins-bb5lc" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.841278 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fedcb6dd-93e2-4530-b748-52a296d7809d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bb5lc\" (UID: \"fedcb6dd-93e2-4530-b748-52a296d7809d\") " pod="openshift-multus/multus-additional-cni-plugins-bb5lc" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.841390 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/d9b34413-4767-4d59-b13b-8f882453977a-hostroot\") pod \"multus-6pvpm\" (UID: \"d9b34413-4767-4d59-b13b-8f882453977a\") " pod="openshift-multus/multus-6pvpm" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.841499 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.841606 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.841196 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.841730 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fedcb6dd-93e2-4530-b748-52a296d7809d-system-cni-dir\") pod \"multus-additional-cni-plugins-bb5lc\" (UID: \"fedcb6dd-93e2-4530-b748-52a296d7809d\") " pod="openshift-multus/multus-additional-cni-plugins-bb5lc" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.841809 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.841935 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d9b34413-4767-4d59-b13b-8f882453977a-multus-cni-dir\") pod \"multus-6pvpm\" (UID: \"d9b34413-4767-4d59-b13b-8f882453977a\") " pod="openshift-multus/multus-6pvpm" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.842161 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d9b34413-4767-4d59-b13b-8f882453977a-cni-binary-copy\") pod \"multus-6pvpm\" (UID: \"d9b34413-4767-4d59-b13b-8f882453977a\") " pod="openshift-multus/multus-6pvpm" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.842335 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.842426 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.842527 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.842446 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.842530 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "kube-api-access-8nspp". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.842792 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/d9b34413-4767-4d59-b13b-8f882453977a-host-run-multus-certs\") pod \"multus-6pvpm\" (UID: \"d9b34413-4767-4d59-b13b-8f882453977a\") " pod="openshift-multus/multus-6pvpm" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.842971 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dd3b8a6d-69a8-4079-a747-f379b71bcafe-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-nxrc9\" (UID: \"dd3b8a6d-69a8-4079-a747-f379b71bcafe\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-nxrc9" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.843095 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20fc4331-f128-4a9a-b77f-85af1cf094cf-tmp-dir\") pod \"node-resolver-s88dj\" (UID: \"20fc4331-f128-4a9a-b77f-85af1cf094cf\") " pod="openshift-dns/node-resolver-s88dj" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.843241 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-run-openvswitch\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.842802 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.843379 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.842911 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.842964 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.841110 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.843229 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.843379 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-run-ovn\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.843933 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d9b34413-4767-4d59-b13b-8f882453977a-system-cni-dir\") pod \"multus-6pvpm\" (UID: \"d9b34413-4767-4d59-b13b-8f882453977a\") " pod="openshift-multus/multus-6pvpm" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.844048 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mtl6\" (UniqueName: \"kubernetes.io/projected/d9b34413-4767-4d59-b13b-8f882453977a-kube-api-access-8mtl6\") pod \"multus-6pvpm\" (UID: \"d9b34413-4767-4d59-b13b-8f882453977a\") " pod="openshift-multus/multus-6pvpm" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.844160 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-host-run-ovn-kubernetes\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.844265 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fedcb6dd-93e2-4530-b748-52a296d7809d-cnibin\") pod \"multus-additional-cni-plugins-bb5lc\" (UID: \"fedcb6dd-93e2-4530-b748-52a296d7809d\") " pod="openshift-multus/multus-additional-cni-plugins-bb5lc" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.844374 5099 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d9b34413-4767-4d59-b13b-8f882453977a-multus-conf-dir\") pod \"multus-6pvpm\" (UID: \"d9b34413-4767-4d59-b13b-8f882453977a\") " pod="openshift-multus/multus-6pvpm" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.844479 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d9b34413-4767-4d59-b13b-8f882453977a-etc-kubernetes\") pod \"multus-6pvpm\" (UID: \"d9b34413-4767-4d59-b13b-8f882453977a\") " pod="openshift-multus/multus-6pvpm" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.844626 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-host-cni-netd\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.844752 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fedcb6dd-93e2-4530-b748-52a296d7809d-cni-binary-copy\") pod \"multus-additional-cni-plugins-bb5lc\" (UID: \"fedcb6dd-93e2-4530-b748-52a296d7809d\") " pod="openshift-multus/multus-additional-cni-plugins-bb5lc" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.843817 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.843846 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.844226 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.844390 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.843699 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.844813 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.844845 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d9b34413-4767-4d59-b13b-8f882453977a-cnibin\") pod \"multus-6pvpm\" (UID: \"d9b34413-4767-4d59-b13b-8f882453977a\") " pod="openshift-multus/multus-6pvpm" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.845012 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghhnt\" (UniqueName: \"kubernetes.io/projected/0d26f0ad-829f-4f64-82b5-1292bd2316f0-kube-api-access-ghhnt\") pod \"network-metrics-daemon-tsdhb\" (UID: \"0d26f0ad-829f-4f64-82b5-1292bd2316f0\") " pod="openshift-multus/network-metrics-daemon-tsdhb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.845432 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.845474 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.845592 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7xvb\" (UniqueName: \"kubernetes.io/projected/dd3b8a6d-69a8-4079-a747-f379b71bcafe-kube-api-access-s7xvb\") pod \"ovnkube-control-plane-57b78d8988-nxrc9\" (UID: \"dd3b8a6d-69a8-4079-a747-f379b71bcafe\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-nxrc9" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.845681 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.845721 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.845670 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"173cce9e-0a3e-4d85-b057-083e13852fa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://69214b9af392a2ff7edbb9c7275209780398e51d77d93f6fc12bdb6df10c0018\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b52ee1ab5f30ea08acd0bcb057da0e09c87c2f8116e08d2d2d7bd32030c95907\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube
-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://77fff9816418e4672b53ddb5f6696e449760e9cf1da327c01939f0df0bb333f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1dec0ab2e83799703e0d0ba5f91da97aaa7f611990710cb8ee7aa49e64a46994\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dec0ab2e83799703e0d0ba5f91da97aaa7f611990710cb8ee7aa49e64a46994\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T18:15:11Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW0121 18:15:10.717930 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 18:15:10.718167 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0121 18:15:10.719290 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2744791526/tls.crt::/tmp/serving-cert-2744791526/tls.key\\\\\\\"\\\\nI0121 18:15:10.978164 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 18:15:10.981243 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 18:15:10.981268 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 18:15:10.981347 1 maxinflight.go:116] \\\\\\\"Set 
denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 18:15:10.981360 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 18:15:10.988236 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 18:15:10.988291 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 18:15:10.988264 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 18:15:10.988297 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 18:15:10.988309 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 18:15:10.988312 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 18:15:10.988316 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 18:15:10.988319 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 18:15:10.990920 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T18:15:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a571febb49a23145b3d2cad3e4cc539fe35f0d8fdd7c4876c9868a1d1889ac58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d3315339d0385d2d2682ac27476be8f619f5936da9319a84a35621133d98243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8
e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d3315339d0385d2d2682ac27476be8f619f5936da9319a84a35621133d98243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T18:14:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:14:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.845953 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "kube-api-access-grwfz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.845997 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.846063 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.846836 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.847194 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "kube-api-access-hm9x7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
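The status patch in the entry above never reaches persistence: the API server has to consult the mutating admission webhook pod.network-node-identity.openshift.io before accepting the pod status update, and that webhook's endpoint at https://127.0.0.1:9743/pod is refusing connections, so the whole PATCH comes back as "Internal error occurred" and the kubelet's status manager will retry it on a later sync. A minimal reachability probe in the spirit of the failing Post is sketched below; this is an assumed diagnostic, not kubelet or OpenShift code, and the skipped certificate verification is only so the probe can talk to the webhook's internally signed endpoint.

```go
// webhook_probe.go (hypothetical): reproduce the Post from the log entry
// above and report whether the webhook endpoint is reachable yet.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"strings"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 10 * time.Second, // mirrors the ?timeout=10s the API server used
		Transport: &http.Transport{
			// The webhook serves TLS with an internal CA; skip verification
			// for this reachability check only.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Post("https://127.0.0.1:9743/pod?timeout=10s",
		"application/json", strings.NewReader("{}"))
	if err != nil {
		// While the webhook pod is down this prints the same
		// "connect: connection refused" seen in the journal.
		fmt.Println("webhook unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("webhook answered:", resp.Status)
}
```

As long as the probe reports connection refused, every status patch that matches this webhook's rules will keep failing the same way, regardless of what the kubelet does.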
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.847309 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.847928 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.847958 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.848330 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "ca-trust-extracted-pem". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.848502 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.849055 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
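From this point the log alternates between two layers of the kubelet's volume manager: operation_generator.go:781 reports each TearDown as it completes, and reconciler_common.go:299 then records the volume as detached from the node's actual state of the world, with an empty DevicePath, one record per volume for every pod being torn down. Runs this long are easier to read in aggregate; the stand-alone helper below is an assumed illustration (not kubelet code) that tallies the "Volume detached" records per volume plugin from a journal excerpt piped on stdin.

```go
// tally_detached.go (hypothetical): count "Volume detached" records per
// volume plugin in a saved journal excerpt read from stdin.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// A UniqueName looks like kubernetes.io/<plugin>/<pod-uid>-<volume>;
	// the journal renders the embedded quotes as \" so the pattern
	// matches a literal backslash before the quote.
	re := regexp.MustCompile(`Volume detached for volume .*?UniqueName: \\"(kubernetes\.io/[a-z-]+)/`)
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1<<20), 1<<20) // journal lines can be very long
	for sc.Scan() {
		for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
			counts[m[1]]++
		}
	}
	for plugin, n := range counts {
		fmt.Printf("%-26s %d\n", plugin, n)
	}
}
```

Fed something like `journalctl -u kubelet --since "18:15:21" | go run tally_detached.go`, the counts here are dominated by projected, configmap, secret and empty-dir volumes, which fits whole pods being deleted at once rather than individual mounts failing.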
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.849095 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.849418 5099 reconciler_common.go:299] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\""
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.849440 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") on node \"crc\" DevicePath \"\""
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.849455 5099 reconciler_common.go:299] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.849469 5099 reconciler_common.go:299] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\""
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.849483 5099 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.849498 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath \"\""
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.849515 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.849528 5099 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\""
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.849542 5099 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.849555 5099 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.849569 5099 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath \"\""
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.849583 5099 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node 
\"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.849597 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.849612 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.849625 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.849638 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.849652 5099 reconciler_common.go:299] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.849664 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.849677 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.849690 5099 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.849706 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.849720 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.849749 5099 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.849762 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.849777 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc 
kubenswrapper[5099]: I0121 18:15:21.849791 5099 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.849804 5099 reconciler_common.go:299] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.849817 5099 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.849829 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.849842 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.849856 5099 reconciler_common.go:299] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.849868 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.849884 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.849897 5099 reconciler_common.go:299] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.849909 5099 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.849924 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.849938 5099 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.849951 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc 
kubenswrapper[5099]: I0121 18:15:21.849964 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.849977 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.849990 5099 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850003 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850016 5099 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850028 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850040 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850053 5099 reconciler_common.go:299] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850066 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850079 5099 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850092 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850106 5099 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850119 5099 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on node \"crc\" 
DevicePath \"\""
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850132 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\""
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850147 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850160 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\""
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850173 5099 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\""
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850216 5099 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850230 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\""
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850244 5099 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850257 5099 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850270 5099 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\""
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850287 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on node \"crc\" DevicePath \"\""
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850302 5099 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" DevicePath \"\""
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850317 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850329 5099 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\""
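Each UniqueName in these records encodes where the volume came from: the plugin that manages it, the UID of the pod it belonged to, and the pod-local volume name, in the shape kubernetes.io/<plugin>/<pod-UID>-<volume-name>. Since the UID is a fixed-width 36-character UUID in every record here, the three parts can be recovered mechanically; the snippet below is an assumed helper, not kubelet code, using the etcd-client record above as its example.

```go
// uniquename_anatomy.go (hypothetical): split a volume UniqueName into
// plugin, pod UID and volume name, assuming the 36-character UUID form
// seen throughout this journal excerpt.
package main

import (
	"fmt"
	"strings"
)

func splitUniqueName(u string) (plugin, podUID, volume string, err error) {
	rest, ok := strings.CutPrefix(u, "kubernetes.io/")
	if !ok {
		return "", "", "", fmt.Errorf("not a kubernetes.io volume: %q", u)
	}
	plugin, tail, ok := strings.Cut(rest, "/")
	if !ok || len(tail) < 37 { // 36-char UUID plus the separating dash
		return "", "", "", fmt.Errorf("unexpected UniqueName shape: %q", u)
	}
	return plugin, tail[:36], tail[37:], nil
}

func main() {
	p, uid, vol, err := splitUniqueName(
		"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client")
	if err != nil {
		panic(err)
	}
	fmt.Println(p, uid, vol) // secret f559dfa3-3917-43a2-97f6-61ddfda10e93 etcd-client
}
```

The same UID appears in the pod "..." (UID: "...") fields of the TearDown entries earlier, which is how a detach record can be traced back to the pod whose deletion caused it.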
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850340 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath \"\""
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850353 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node \"crc\" DevicePath \"\""
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850367 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850379 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850391 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850404 5099 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node \"crc\" DevicePath \"\""
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850417 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850458 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850473 5099 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\""
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850486 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\""
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850498 5099 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\""
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850509 5099 reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\""
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850522 5099 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850533 5099 reconciler_common.go:299] "Volume detached for volume 
\"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850545 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850556 5099 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850569 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850581 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850594 5099 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850605 5099 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850617 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850628 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850656 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850667 5099 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850679 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850691 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850703 5099 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850715 5099 reconciler_common.go:299] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850727 5099 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850760 5099 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850777 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850790 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850802 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850816 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850830 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850843 5099 reconciler_common.go:299] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850854 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850866 5099 reconciler_common.go:299] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850878 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850888 5099 reconciler_common.go:299] "Volume detached for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850900 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850915 5099 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850927 5099 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850937 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850948 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850960 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850970 5099 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850982 5099 reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.850993 5099 reconciler_common.go:299] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851004 5099 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851015 5099 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851027 5099 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851039 5099 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851051 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851062 5099 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851076 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851090 5099 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851105 5099 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851115 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851127 5099 reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851140 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851152 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851167 5099 reconciler_common.go:299] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851179 5099 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851193 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851205 5099 
reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851218 5099 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851229 5099 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851242 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851254 5099 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851267 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851278 5099 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851290 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851302 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851317 5099 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851328 5099 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851340 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851351 5099 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851362 5099 reconciler_common.go:299] "Volume detached for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851373 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851384 5099 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851390 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b19b831f-eaf0-4c77-859b-84eb9a5f233c-proxy-tls\") pod \"machine-config-daemon-hsl47\" (UID: \"b19b831f-eaf0-4c77-859b-84eb9a5f233c\") " pod="openshift-machine-config-operator/machine-config-daemon-hsl47" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851394 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851460 5099 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851472 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851483 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851495 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851510 5099 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851520 5099 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851534 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851545 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851556 5099 reconciler_common.go:299] 
"Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851565 5099 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851575 5099 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851586 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851597 5099 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851609 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851619 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851633 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851645 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851657 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851667 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851678 5099 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851689 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851698 5099 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851709 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851720 5099 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851730 5099 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851760 5099 reconciler_common.go:299] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851770 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851783 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851793 5099 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851803 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851813 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851823 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851833 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851844 5099 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851855 5099 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851866 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851878 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851889 5099 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851900 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851911 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851922 5099 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851933 5099 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851943 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851953 5099 reconciler_common.go:299] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851966 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851977 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851989 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.851999 5099 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.852010 5099 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.852021 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.852032 5099 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.852044 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.852056 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.852067 5099 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.854628 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "kube-api-access-l9stx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.854747 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.855970 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.856769 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.862150 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.863439 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gknfh\" (UniqueName: \"kubernetes.io/projected/b19b831f-eaf0-4c77-859b-84eb9a5f233c-kube-api-access-gknfh\") pod \"machine-config-daemon-hsl47\" (UID: \"b19b831f-eaf0-4c77-859b-84eb9a5f233c\") " pod="openshift-machine-config-operator/machine-config-daemon-hsl47" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.869202 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.879239 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.882022 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-6pvpm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9b34413-4767-4d59-b13b-8f882453977a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mtl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6pvpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.888640 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.892460 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.896179 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bb5lc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fedcb6dd-93e2-4530-b748-52a296d7809d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-75f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bb5lc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.901018 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.901069 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.901080 5099 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.901100 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.901114 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:21Z","lastTransitionTime":"2026-01-21T18:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.907115 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a3258e1-12c7-4a69-8c70-81a224fb787f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee286e0146a85cafd23651aabbbe69ebe16248b425e092b613ba569a236b6e20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c2f414962af81ed6e55d81dd5f9dd9dd326f051f9f43d3547232d163c9c8dfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c17
2bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c2f414962af81ed6e55d81dd5f9dd9dd326f051f9f43d3547232d163c9c8dfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T18:14:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:14:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.917810 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.918765 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.920902 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.920878 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a40c7046-7081-492d-8099-e40a88ecf0ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://95eb561748a9a0787cdcdcbb483eb6c1e2c1949db936d7d25fbfe7f9cfc5db88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://af7a566641ee4a32e0af093712f5a413ba74a4178f0ad380ede1b059032e730a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://52c12ba2dd207284ad9505418797edb5216cc4e70217b3f68d2e9ca82396e7f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ea5e150318dfc21c5ddf7304a2aa589a3a74b69339533e126143c429353ee516\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:14:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.922409 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.924529 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes" Jan 21 18:15:21 
crc kubenswrapper[5099]: I0121 18:15:21.926643 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.928107 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.929449 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.930076 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.931844 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e1965-1754-483d-b6cc-bfae7038bbca" path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.932909 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.933376 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.934458 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.935248 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.937795 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.938672 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.939762 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.941661 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.943511 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.943930 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tsdhb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d26f0ad-829f-4f64-82b5-1292bd2316f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghhnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghhnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tsdhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.944343 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.945655 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.946609 5099 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.948349 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.949022 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.950313 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.951025 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.952329 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.953283 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/57ef6e89-3637-4516-a464-973f45d9ed03-host\") pod \"node-ca-2q8ng\" (UID: \"57ef6e89-3637-4516-a464-973f45d9ed03\") " pod="openshift-image-registry/node-ca-2q8ng" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.953323 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/20fc4331-f128-4a9a-b77f-85af1cf094cf-hosts-file\") pod \"node-resolver-s88dj\" (UID: \"20fc4331-f128-4a9a-b77f-85af1cf094cf\") " pod="openshift-dns/node-resolver-s88dj" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.953347 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d26f0ad-829f-4f64-82b5-1292bd2316f0-metrics-certs\") pod \"network-metrics-daemon-tsdhb\" (UID: \"0d26f0ad-829f-4f64-82b5-1292bd2316f0\") " pod="openshift-multus/network-metrics-daemon-tsdhb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.953369 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d7521550-bc40-43eb-bcb0-f563416d810b-ovn-node-metrics-cert\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.953392 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/fedcb6dd-93e2-4530-b748-52a296d7809d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bb5lc\" (UID: \"fedcb6dd-93e2-4530-b748-52a296d7809d\") " pod="openshift-multus/multus-additional-cni-plugins-bb5lc" Jan 21 18:15:21 crc kubenswrapper[5099]: E0121 18:15:21.953522 5099 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 
18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.953525 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/57ef6e89-3637-4516-a464-973f45d9ed03-host\") pod \"node-ca-2q8ng\" (UID: \"57ef6e89-3637-4516-a464-973f45d9ed03\") " pod="openshift-image-registry/node-ca-2q8ng" Jan 21 18:15:21 crc kubenswrapper[5099]: E0121 18:15:21.953591 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d26f0ad-829f-4f64-82b5-1292bd2316f0-metrics-certs podName:0d26f0ad-829f-4f64-82b5-1292bd2316f0 nodeName:}" failed. No retries permitted until 2026-01-21 18:15:22.453571817 +0000 UTC m=+79.867534298 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0d26f0ad-829f-4f64-82b5-1292bd2316f0-metrics-certs") pod "network-metrics-daemon-tsdhb" (UID: "0d26f0ad-829f-4f64-82b5-1292bd2316f0") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.953749 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/20fc4331-f128-4a9a-b77f-85af1cf094cf-hosts-file\") pod \"node-resolver-s88dj\" (UID: \"20fc4331-f128-4a9a-b77f-85af1cf094cf\") " pod="openshift-dns/node-resolver-s88dj" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.953793 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-node-log\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.953833 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-log-socket\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.953850 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d7521550-bc40-43eb-bcb0-f563416d810b-ovnkube-config\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.953867 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-etc-openvswitch\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.953887 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-systemd-units\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.953904 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fedcb6dd-93e2-4530-b748-52a296d7809d-os-release\") pod 
\"multus-additional-cni-plugins-bb5lc\" (UID: \"fedcb6dd-93e2-4530-b748-52a296d7809d\") " pod="openshift-multus/multus-additional-cni-plugins-bb5lc" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.953920 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fedcb6dd-93e2-4530-b748-52a296d7809d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bb5lc\" (UID: \"fedcb6dd-93e2-4530-b748-52a296d7809d\") " pod="openshift-multus/multus-additional-cni-plugins-bb5lc" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.953946 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/d9b34413-4767-4d59-b13b-8f882453977a-hostroot\") pod \"multus-6pvpm\" (UID: \"d9b34413-4767-4d59-b13b-8f882453977a\") " pod="openshift-multus/multus-6pvpm" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.953973 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fedcb6dd-93e2-4530-b748-52a296d7809d-system-cni-dir\") pod \"multus-additional-cni-plugins-bb5lc\" (UID: \"fedcb6dd-93e2-4530-b748-52a296d7809d\") " pod="openshift-multus/multus-additional-cni-plugins-bb5lc" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.953987 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-node-log\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.953998 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d9b34413-4767-4d59-b13b-8f882453977a-multus-cni-dir\") pod \"multus-6pvpm\" (UID: \"d9b34413-4767-4d59-b13b-8f882453977a\") " pod="openshift-multus/multus-6pvpm" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.954021 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d9b34413-4767-4d59-b13b-8f882453977a-cni-binary-copy\") pod \"multus-6pvpm\" (UID: \"d9b34413-4767-4d59-b13b-8f882453977a\") " pod="openshift-multus/multus-6pvpm" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.954044 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/d9b34413-4767-4d59-b13b-8f882453977a-host-run-multus-certs\") pod \"multus-6pvpm\" (UID: \"d9b34413-4767-4d59-b13b-8f882453977a\") " pod="openshift-multus/multus-6pvpm" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.954049 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-log-socket\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.954066 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dd3b8a6d-69a8-4079-a747-f379b71bcafe-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-nxrc9\" (UID: \"dd3b8a6d-69a8-4079-a747-f379b71bcafe\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-nxrc9" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.954088 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20fc4331-f128-4a9a-b77f-85af1cf094cf-tmp-dir\") pod \"node-resolver-s88dj\" (UID: \"20fc4331-f128-4a9a-b77f-85af1cf094cf\") " pod="openshift-dns/node-resolver-s88dj" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.954110 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-run-openvswitch\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.954132 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-run-ovn\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.954151 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d9b34413-4767-4d59-b13b-8f882453977a-system-cni-dir\") pod \"multus-6pvpm\" (UID: \"d9b34413-4767-4d59-b13b-8f882453977a\") " pod="openshift-multus/multus-6pvpm" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.954172 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8mtl6\" (UniqueName: \"kubernetes.io/projected/d9b34413-4767-4d59-b13b-8f882453977a-kube-api-access-8mtl6\") pod \"multus-6pvpm\" (UID: \"d9b34413-4767-4d59-b13b-8f882453977a\") " pod="openshift-multus/multus-6pvpm" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.954194 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-host-run-ovn-kubernetes\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.954215 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fedcb6dd-93e2-4530-b748-52a296d7809d-cnibin\") pod \"multus-additional-cni-plugins-bb5lc\" (UID: \"fedcb6dd-93e2-4530-b748-52a296d7809d\") " pod="openshift-multus/multus-additional-cni-plugins-bb5lc" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.954237 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d9b34413-4767-4d59-b13b-8f882453977a-multus-conf-dir\") pod \"multus-6pvpm\" (UID: \"d9b34413-4767-4d59-b13b-8f882453977a\") " pod="openshift-multus/multus-6pvpm" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.954260 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d9b34413-4767-4d59-b13b-8f882453977a-etc-kubernetes\") pod \"multus-6pvpm\" (UID: \"d9b34413-4767-4d59-b13b-8f882453977a\") " pod="openshift-multus/multus-6pvpm" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.954283 5099 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-host-cni-netd\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.954303 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fedcb6dd-93e2-4530-b748-52a296d7809d-cni-binary-copy\") pod \"multus-additional-cni-plugins-bb5lc\" (UID: \"fedcb6dd-93e2-4530-b748-52a296d7809d\") " pod="openshift-multus/multus-additional-cni-plugins-bb5lc" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.954322 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d9b34413-4767-4d59-b13b-8f882453977a-cnibin\") pod \"multus-6pvpm\" (UID: \"d9b34413-4767-4d59-b13b-8f882453977a\") " pod="openshift-multus/multus-6pvpm" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.954342 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ghhnt\" (UniqueName: \"kubernetes.io/projected/0d26f0ad-829f-4f64-82b5-1292bd2316f0-kube-api-access-ghhnt\") pod \"network-metrics-daemon-tsdhb\" (UID: \"0d26f0ad-829f-4f64-82b5-1292bd2316f0\") " pod="openshift-multus/network-metrics-daemon-tsdhb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.954382 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s7xvb\" (UniqueName: \"kubernetes.io/projected/dd3b8a6d-69a8-4079-a747-f379b71bcafe-kube-api-access-s7xvb\") pod \"ovnkube-control-plane-57b78d8988-nxrc9\" (UID: \"dd3b8a6d-69a8-4079-a747-f379b71bcafe\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-nxrc9" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.954412 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/57ef6e89-3637-4516-a464-973f45d9ed03-serviceca\") pod \"node-ca-2q8ng\" (UID: \"57ef6e89-3637-4516-a464-973f45d9ed03\") " pod="openshift-image-registry/node-ca-2q8ng" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.954431 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d7521550-bc40-43eb-bcb0-f563416d810b-env-overrides\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.954453 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ss74r\" (UniqueName: \"kubernetes.io/projected/d7521550-bc40-43eb-bcb0-f563416d810b-kube-api-access-ss74r\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.954475 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/fedcb6dd-93e2-4530-b748-52a296d7809d-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-bb5lc\" (UID: \"fedcb6dd-93e2-4530-b748-52a296d7809d\") " pod="openshift-multus/multus-additional-cni-plugins-bb5lc" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.954496 5099 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7x9c2\" (UniqueName: \"kubernetes.io/projected/20fc4331-f128-4a9a-b77f-85af1cf094cf-kube-api-access-7x9c2\") pod \"node-resolver-s88dj\" (UID: \"20fc4331-f128-4a9a-b77f-85af1cf094cf\") " pod="openshift-dns/node-resolver-s88dj" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.954517 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-run-systemd\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.954537 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d7521550-bc40-43eb-bcb0-f563416d810b-ovnkube-script-lib\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.954557 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-75f6x\" (UniqueName: \"kubernetes.io/projected/fedcb6dd-93e2-4530-b748-52a296d7809d-kube-api-access-75f6x\") pod \"multus-additional-cni-plugins-bb5lc\" (UID: \"fedcb6dd-93e2-4530-b748-52a296d7809d\") " pod="openshift-multus/multus-additional-cni-plugins-bb5lc" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.954577 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d9b34413-4767-4d59-b13b-8f882453977a-os-release\") pod \"multus-6pvpm\" (UID: \"d9b34413-4767-4d59-b13b-8f882453977a\") " pod="openshift-multus/multus-6pvpm" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.954596 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/d9b34413-4767-4d59-b13b-8f882453977a-multus-socket-dir-parent\") pod \"multus-6pvpm\" (UID: \"d9b34413-4767-4d59-b13b-8f882453977a\") " pod="openshift-multus/multus-6pvpm" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.954615 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d9b34413-4767-4d59-b13b-8f882453977a-host-run-k8s-cni-cncf-io\") pod \"multus-6pvpm\" (UID: \"d9b34413-4767-4d59-b13b-8f882453977a\") " pod="openshift-multus/multus-6pvpm" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.954635 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d9b34413-4767-4d59-b13b-8f882453977a-host-run-netns\") pod \"multus-6pvpm\" (UID: \"d9b34413-4767-4d59-b13b-8f882453977a\") " pod="openshift-multus/multus-6pvpm" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.954660 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d9b34413-4767-4d59-b13b-8f882453977a-etc-kubernetes\") pod \"multus-6pvpm\" (UID: \"d9b34413-4767-4d59-b13b-8f882453977a\") " pod="openshift-multus/multus-6pvpm" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.954664 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k6xzx\" (UniqueName: 
\"kubernetes.io/projected/57ef6e89-3637-4516-a464-973f45d9ed03-kube-api-access-k6xzx\") pod \"node-ca-2q8ng\" (UID: \"57ef6e89-3637-4516-a464-973f45d9ed03\") " pod="openshift-image-registry/node-ca-2q8ng" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.954708 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-host-kubelet\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.954756 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-host-slash\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.954803 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-host-cni-bin\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.954844 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-host-run-netns\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.954864 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dd3b8a6d-69a8-4079-a747-f379b71bcafe-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-nxrc9\" (UID: \"dd3b8a6d-69a8-4079-a747-f379b71bcafe\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-nxrc9" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.954892 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d9b34413-4767-4d59-b13b-8f882453977a-host-var-lib-kubelet\") pod \"multus-6pvpm\" (UID: \"d9b34413-4767-4d59-b13b-8f882453977a\") " pod="openshift-multus/multus-6pvpm" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.954911 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dd3b8a6d-69a8-4079-a747-f379b71bcafe-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-nxrc9\" (UID: \"dd3b8a6d-69a8-4079-a747-f379b71bcafe\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-nxrc9" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.954936 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-var-lib-openvswitch\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.954944 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/d7521550-bc40-43eb-bcb0-f563416d810b-ovnkube-config\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.954956 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d9b34413-4767-4d59-b13b-8f882453977a-host-var-lib-cni-bin\") pod \"multus-6pvpm\" (UID: \"d9b34413-4767-4d59-b13b-8f882453977a\") " pod="openshift-multus/multus-6pvpm" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.954993 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/d9b34413-4767-4d59-b13b-8f882453977a-multus-daemon-config\") pod \"multus-6pvpm\" (UID: \"d9b34413-4767-4d59-b13b-8f882453977a\") " pod="openshift-multus/multus-6pvpm" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.955017 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-etc-openvswitch\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.955036 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-host-cni-netd\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.955057 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/d9b34413-4767-4d59-b13b-8f882453977a-hostroot\") pod \"multus-6pvpm\" (UID: \"d9b34413-4767-4d59-b13b-8f882453977a\") " pod="openshift-multus/multus-6pvpm" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.955090 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.955122 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fedcb6dd-93e2-4530-b748-52a296d7809d-system-cni-dir\") pod \"multus-additional-cni-plugins-bb5lc\" (UID: \"fedcb6dd-93e2-4530-b748-52a296d7809d\") " pod="openshift-multus/multus-additional-cni-plugins-bb5lc" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.955417 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d9b34413-4767-4d59-b13b-8f882453977a-multus-cni-dir\") pod \"multus-6pvpm\" (UID: \"d9b34413-4767-4d59-b13b-8f882453977a\") " pod="openshift-multus/multus-6pvpm" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.955640 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fedcb6dd-93e2-4530-b748-52a296d7809d-cni-binary-copy\") pod \"multus-additional-cni-plugins-bb5lc\" (UID: 
\"fedcb6dd-93e2-4530-b748-52a296d7809d\") " pod="openshift-multus/multus-additional-cni-plugins-bb5lc" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.955704 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d9b34413-4767-4d59-b13b-8f882453977a-cnibin\") pod \"multus-6pvpm\" (UID: \"d9b34413-4767-4d59-b13b-8f882453977a\") " pod="openshift-multus/multus-6pvpm" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.955935 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/d9b34413-4767-4d59-b13b-8f882453977a-multus-daemon-config\") pod \"multus-6pvpm\" (UID: \"d9b34413-4767-4d59-b13b-8f882453977a\") " pod="openshift-multus/multus-6pvpm" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.956020 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-systemd-units\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.956039 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d9b34413-4767-4d59-b13b-8f882453977a-cni-binary-copy\") pod \"multus-6pvpm\" (UID: \"d9b34413-4767-4d59-b13b-8f882453977a\") " pod="openshift-multus/multus-6pvpm" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.956089 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/d9b34413-4767-4d59-b13b-8f882453977a-host-run-multus-certs\") pod \"multus-6pvpm\" (UID: \"d9b34413-4767-4d59-b13b-8f882453977a\") " pod="openshift-multus/multus-6pvpm" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.956119 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fedcb6dd-93e2-4530-b748-52a296d7809d-os-release\") pod \"multus-additional-cni-plugins-bb5lc\" (UID: \"fedcb6dd-93e2-4530-b748-52a296d7809d\") " pod="openshift-multus/multus-additional-cni-plugins-bb5lc" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.954575 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/fedcb6dd-93e2-4530-b748-52a296d7809d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bb5lc\" (UID: \"fedcb6dd-93e2-4530-b748-52a296d7809d\") " pod="openshift-multus/multus-additional-cni-plugins-bb5lc" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.955060 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.956823 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dd3b8a6d-69a8-4079-a747-f379b71bcafe-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-nxrc9\" (UID: \"dd3b8a6d-69a8-4079-a747-f379b71bcafe\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-nxrc9" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.957344 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/57ef6e89-3637-4516-a464-973f45d9ed03-serviceca\") pod \"node-ca-2q8ng\" (UID: \"57ef6e89-3637-4516-a464-973f45d9ed03\") " pod="openshift-image-registry/node-ca-2q8ng" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.958115 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fedcb6dd-93e2-4530-b748-52a296d7809d-cnibin\") pod \"multus-additional-cni-plugins-bb5lc\" (UID: \"fedcb6dd-93e2-4530-b748-52a296d7809d\") " pod="openshift-multus/multus-additional-cni-plugins-bb5lc" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.958090 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d9b34413-4767-4d59-b13b-8f882453977a-host-var-lib-kubelet\") pod \"multus-6pvpm\" (UID: \"d9b34413-4767-4d59-b13b-8f882453977a\") " pod="openshift-multus/multus-6pvpm" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.958173 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.958218 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-host-slash\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.958236 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-host-kubelet\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.958220 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-run-ovn\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.958266 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-host-cni-bin\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.958272 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d9b34413-4767-4d59-b13b-8f882453977a-system-cni-dir\") pod \"multus-6pvpm\" (UID: \"d9b34413-4767-4d59-b13b-8f882453977a\") " pod="openshift-multus/multus-6pvpm" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.958173 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d9b34413-4767-4d59-b13b-8f882453977a-host-var-lib-cni-bin\") pod \"multus-6pvpm\" (UID: 
\"d9b34413-4767-4d59-b13b-8f882453977a\") " pod="openshift-multus/multus-6pvpm" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.958305 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-host-run-netns\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.958302 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d9b34413-4767-4d59-b13b-8f882453977a-host-run-k8s-cni-cncf-io\") pod \"multus-6pvpm\" (UID: \"d9b34413-4767-4d59-b13b-8f882453977a\") " pod="openshift-multus/multus-6pvpm" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.958359 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/d9b34413-4767-4d59-b13b-8f882453977a-multus-socket-dir-parent\") pod \"multus-6pvpm\" (UID: \"d9b34413-4767-4d59-b13b-8f882453977a\") " pod="openshift-multus/multus-6pvpm" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.958535 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-host-run-ovn-kubernetes\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.958502 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d9b34413-4767-4d59-b13b-8f882453977a-host-run-netns\") pod \"multus-6pvpm\" (UID: \"d9b34413-4767-4d59-b13b-8f882453977a\") " pod="openshift-multus/multus-6pvpm" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.958615 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/d9b34413-4767-4d59-b13b-8f882453977a-host-var-lib-cni-multus\") pod \"multus-6pvpm\" (UID: \"d9b34413-4767-4d59-b13b-8f882453977a\") " pod="openshift-multus/multus-6pvpm" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.960015 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.960033 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.960045 5099 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.960060 5099 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.960071 5099 reconciler_common.go:299] "Volume 
detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.960081 5099 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.958910 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dd3b8a6d-69a8-4079-a747-f379b71bcafe-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-nxrc9\" (UID: \"dd3b8a6d-69a8-4079-a747-f379b71bcafe\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-nxrc9" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.958173 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-run-openvswitch\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.959401 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d7521550-bc40-43eb-bcb0-f563416d810b-env-overrides\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.959429 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20fc4331-f128-4a9a-b77f-85af1cf094cf-tmp-dir\") pod \"node-resolver-s88dj\" (UID: \"20fc4331-f128-4a9a-b77f-85af1cf094cf\") " pod="openshift-dns/node-resolver-s88dj" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.959523 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/fedcb6dd-93e2-4530-b748-52a296d7809d-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-bb5lc\" (UID: \"fedcb6dd-93e2-4530-b748-52a296d7809d\") " pod="openshift-multus/multus-additional-cni-plugins-bb5lc" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.959588 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b19b831f-eaf0-4c77-859b-84eb9a5f233c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gknfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gknfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hsl47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.958618 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d9b34413-4767-4d59-b13b-8f882453977a-os-release\") pod \"multus-6pvpm\" (UID: \"d9b34413-4767-4d59-b13b-8f882453977a\") " pod="openshift-multus/multus-6pvpm" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.958612 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d7521550-bc40-43eb-bcb0-f563416d810b-ovn-node-metrics-cert\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.958644 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/d9b34413-4767-4d59-b13b-8f882453977a-host-var-lib-cni-multus\") pod \"multus-6pvpm\" (UID: \"d9b34413-4767-4d59-b13b-8f882453977a\") " pod="openshift-multus/multus-6pvpm" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.958650 5099 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-run-systemd\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.958718 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d9b34413-4767-4d59-b13b-8f882453977a-multus-conf-dir\") pod \"multus-6pvpm\" (UID: \"d9b34413-4767-4d59-b13b-8f882453977a\") " pod="openshift-multus/multus-6pvpm" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.958771 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-var-lib-openvswitch\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.960334 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.960415 5099 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.960428 5099 reconciler_common.go:299] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.960443 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.960455 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.960467 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.960478 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.960489 5099 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.960500 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.960512 5099 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" 
(UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.960525 5099 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.960563 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.960574 5099 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.960584 5099 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.960594 5099 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.960604 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.962716 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.963707 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.964274 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dd3b8a6d-69a8-4079-a747-f379b71bcafe-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-nxrc9\" (UID: \"dd3b8a6d-69a8-4079-a747-f379b71bcafe\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-nxrc9" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.964924 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.965094 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d7521550-bc40-43eb-bcb0-f563416d810b-ovnkube-script-lib\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.966255 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f71a554-e414-4bc3-96d2-674060397afe" 
path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.966712 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fedcb6dd-93e2-4530-b748-52a296d7809d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bb5lc\" (UID: \"fedcb6dd-93e2-4530-b748-52a296d7809d\") " pod="openshift-multus/multus-additional-cni-plugins-bb5lc" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.967720 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.969157 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.970059 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.971132 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a88189-c967-4640-879e-27665747f20c" path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.971962 5099 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.972094 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.972317 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6xzx\" (UniqueName: \"kubernetes.io/projected/57ef6e89-3637-4516-a464-973f45d9ed03-kube-api-access-k6xzx\") pod \"node-ca-2q8ng\" (UID: \"57ef6e89-3637-4516-a464-973f45d9ed03\") " pod="openshift-image-registry/node-ca-2q8ng" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.974015 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ss74r\" (UniqueName: \"kubernetes.io/projected/d7521550-bc40-43eb-bcb0-f563416d810b-kube-api-access-ss74r\") pod \"ovnkube-node-svjkb\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") " pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.975486 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7xvb\" (UniqueName: \"kubernetes.io/projected/dd3b8a6d-69a8-4079-a747-f379b71bcafe-kube-api-access-s7xvb\") pod \"ovnkube-control-plane-57b78d8988-nxrc9\" (UID: \"dd3b8a6d-69a8-4079-a747-f379b71bcafe\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-nxrc9" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.976639 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.977828 5099 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.977929 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2q8ng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57ef6e89-3637-4516-a464-973f45d9ed03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6xzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2q8ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.980178 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mtl6\" (UniqueName: \"kubernetes.io/projected/d9b34413-4767-4d59-b13b-8f882453977a-kube-api-access-8mtl6\") pod \"multus-6pvpm\" (UID: \"d9b34413-4767-4d59-b13b-8f882453977a\") " pod="openshift-multus/multus-6pvpm" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.980300 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.981725 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.982700 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.983757 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.983853 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghhnt\" (UniqueName: \"kubernetes.io/projected/0d26f0ad-829f-4f64-82b5-1292bd2316f0-kube-api-access-ghhnt\") pod \"network-metrics-daemon-tsdhb\" (UID: \"0d26f0ad-829f-4f64-82b5-1292bd2316f0\") " pod="openshift-multus/network-metrics-daemon-tsdhb" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.984156 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x9c2\" (UniqueName: \"kubernetes.io/projected/20fc4331-f128-4a9a-b77f-85af1cf094cf-kube-api-access-7x9c2\") pod \"node-resolver-s88dj\" (UID: \"20fc4331-f128-4a9a-b77f-85af1cf094cf\") " pod="openshift-dns/node-resolver-s88dj" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.985601 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.986250 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-75f6x\" (UniqueName: \"kubernetes.io/projected/fedcb6dd-93e2-4530-b748-52a296d7809d-kube-api-access-75f6x\") pod \"multus-additional-cni-plugins-bb5lc\" (UID: \"fedcb6dd-93e2-4530-b748-52a296d7809d\") " pod="openshift-multus/multus-additional-cni-plugins-bb5lc" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.986277 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.987415 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.989274 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.989257 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.990345 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.993696 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.995133 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.996502 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.997385 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e093be35-bb62-4843-b2e8-094545761610" path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.998218 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.998894 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 21 18:15:21 crc kubenswrapper[5099]: I0121 18:15:21.999613 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.002294 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.002334 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.003520 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.003581 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.003583 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.003598 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.003765 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.003794 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:22Z","lastTransitionTime":"2026-01-21T18:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.006043 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2c886-118e-43bb-bef1-c78134de392b" path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.007940 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.008543 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 21 18:15:22 crc kubenswrapper[5099]: W0121 18:15:22.008613 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34177974_8d82_49d2_a763_391d0df3bbd8.slice/crio-17db8a23a15b96cb184c6c691c822dc477cba21d48db986b480d163f8465a7d4 WatchSource:0}: Error finding container 17db8a23a15b96cb184c6c691c822dc477cba21d48db986b480d163f8465a7d4: Status 404 returned error can't find the container with id 17db8a23a15b96cb184c6c691c822dc477cba21d48db986b480d163f8465a7d4 Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.012109 5099 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 21 18:15:22 crc kubenswrapper[5099]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Jan 21 18:15:22 crc kubenswrapper[5099]: set -o allexport Jan 21 18:15:22 crc kubenswrapper[5099]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Jan 21 18:15:22 crc kubenswrapper[5099]: source /etc/kubernetes/apiserver-url.env Jan 21 18:15:22 crc kubenswrapper[5099]: else Jan 21 18:15:22 crc kubenswrapper[5099]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Jan 21 18:15:22 crc kubenswrapper[5099]: exit 1 Jan 21 18:15:22 crc kubenswrapper[5099]: fi Jan 21 18:15:22 crc kubenswrapper[5099]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Jan 21 18:15:22 crc kubenswrapper[5099]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN
_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 21 18:15:22 crc kubenswrapper[5099]: > logger="UnhandledError" Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.013339 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.016113 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-s88dj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20fc4331-f128-4a9a-b77f-85af1cf094cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7x9c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s88dj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:22 crc kubenswrapper[5099]: W0121 18:15:22.017209 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc4541ce_7789_4670_bc75_5c2868e52ce0.slice/crio-805e78a0ff7366f7def9af2f9c16dfb1cc9ac54dd1cc791d667f7d0dfa14488d WatchSource:0}: Error finding container 805e78a0ff7366f7def9af2f9c16dfb1cc9ac54dd1cc791d667f7d0dfa14488d: Status 404 returned error can't find the container with id 805e78a0ff7366f7def9af2f9c16dfb1cc9ac54dd1cc791d667f7d0dfa14488d Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.021753 5099 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 21 18:15:22 
crc kubenswrapper[5099]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 21 18:15:22 crc kubenswrapper[5099]: if [[ -f "/env/_master" ]]; then Jan 21 18:15:22 crc kubenswrapper[5099]: set -o allexport Jan 21 18:15:22 crc kubenswrapper[5099]: source "/env/_master" Jan 21 18:15:22 crc kubenswrapper[5099]: set +o allexport Jan 21 18:15:22 crc kubenswrapper[5099]: fi Jan 21 18:15:22 crc kubenswrapper[5099]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Jan 21 18:15:22 crc kubenswrapper[5099]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Jan 21 18:15:22 crc kubenswrapper[5099]: ho_enable="--enable-hybrid-overlay" Jan 21 18:15:22 crc kubenswrapper[5099]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Jan 21 18:15:22 crc kubenswrapper[5099]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Jan 21 18:15:22 crc kubenswrapper[5099]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Jan 21 18:15:22 crc kubenswrapper[5099]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 21 18:15:22 crc kubenswrapper[5099]: --webhook-cert-dir="/etc/webhook-cert" \ Jan 21 18:15:22 crc kubenswrapper[5099]: --webhook-host=127.0.0.1 \ Jan 21 18:15:22 crc kubenswrapper[5099]: --webhook-port=9743 \ Jan 21 18:15:22 crc kubenswrapper[5099]: ${ho_enable} \ Jan 21 18:15:22 crc kubenswrapper[5099]: --enable-interconnect \ Jan 21 18:15:22 crc kubenswrapper[5099]: --disable-approver \ Jan 21 18:15:22 crc kubenswrapper[5099]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Jan 21 18:15:22 crc kubenswrapper[5099]: --wait-for-kubernetes-api=200s \ Jan 21 18:15:22 crc kubenswrapper[5099]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Jan 21 18:15:22 crc kubenswrapper[5099]: --loglevel="${LOGLEVEL}" Jan 21 18:15:22 crc kubenswrapper[5099]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 21 18:15:22 crc kubenswrapper[5099]: > logger="UnhandledError" Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.025459 5099 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.025566 5099 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 21 18:15:22 crc kubenswrapper[5099]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 21 18:15:22 crc kubenswrapper[5099]: if [[ -f "/env/_master" ]]; then Jan 21 18:15:22 crc kubenswrapper[5099]: set -o allexport Jan 21 18:15:22 crc kubenswrapper[5099]: source "/env/_master" Jan 21 18:15:22 crc kubenswrapper[5099]: set +o allexport Jan 21 18:15:22 crc kubenswrapper[5099]: fi Jan 21 18:15:22 crc kubenswrapper[5099]: Jan 21 18:15:22 crc kubenswrapper[5099]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Jan 21 18:15:22 crc kubenswrapper[5099]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 21 18:15:22 crc kubenswrapper[5099]: --disable-webhook \ Jan 21 18:15:22 crc kubenswrapper[5099]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Jan 21 18:15:22 crc kubenswrapper[5099]: --loglevel="${LOGLEVEL}" Jan 21 18:15:22 crc kubenswrapper[5099]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 21 18:15:22 crc kubenswrapper[5099]: > logger="UnhandledError" Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.026702 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.026708 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.027995 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f829d4ee-178c-4ccd-9dc3-d0eb0300919f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4b9669ba5715cd91dacfc8e6be29f5830419da2d302adcb6b5fa29ef07eac6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d7509aab4bd3b4f6d8703e94734d66e77bba951303378eb60ba01943532bfb41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3fb63119f6a31701b62cf7591777dad22a4f69872d5cdb087308b8b3f6ded84d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://950a43772ad6417d26292c8d55cf9857142fb1e232d309355724767d71735f95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950a43772ad6417d26292c8d55cf9857142fb1e232d309355724767d71735f95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T18:14:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:14:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.044195 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.061685 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7521550-bc40-43eb-bcb0-f563416d810b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b
21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-node-svjkb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.063647 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.073020 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.073183 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-nxrc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd3b8a6d-69a8-4079-a747-f379b71bcafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7xvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7xvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-nxrc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:22 crc kubenswrapper[5099]: W0121 18:15:22.077262 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb19b831f_eaf0_4c77_859b_84eb9a5f233c.slice/crio-77808ea4f92aab1c042ef924a98d4d797887f6e016d47919f2d50e3c5a1f8945 WatchSource:0}: Error finding container 77808ea4f92aab1c042ef924a98d4d797887f6e016d47919f2d50e3c5a1f8945: Status 404 returned error can't find the container with id 77808ea4f92aab1c042ef924a98d4d797887f6e016d47919f2d50e3c5a1f8945 Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.079718 5099 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gknfh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.082823 5099 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gknfh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.084697 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.086193 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-2q8ng" Jan 21 18:15:22 crc kubenswrapper[5099]: W0121 18:15:22.086628 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7521550_bc40_43eb_bcb0_f563416d810b.slice/crio-c326c766374cb3d8f2394017baca2bb66b98a85298a8696e57a7f70208606df7 WatchSource:0}: Error finding container c326c766374cb3d8f2394017baca2bb66b98a85298a8696e57a7f70208606df7: Status 404 returned error can't find the container with id c326c766374cb3d8f2394017baca2bb66b98a85298a8696e57a7f70208606df7 Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.092396 5099 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 21 18:15:22 crc kubenswrapper[5099]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Jan 21 18:15:22 crc kubenswrapper[5099]: apiVersion: v1 Jan 21 18:15:22 crc kubenswrapper[5099]: clusters: Jan 21 18:15:22 crc kubenswrapper[5099]: - cluster: Jan 21 18:15:22 crc kubenswrapper[5099]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Jan 21 18:15:22 crc kubenswrapper[5099]: server: https://api-int.crc.testing:6443 Jan 21 18:15:22 crc kubenswrapper[5099]: name: default-cluster Jan 21 18:15:22 crc kubenswrapper[5099]: contexts: Jan 21 18:15:22 crc kubenswrapper[5099]: - context: Jan 21 18:15:22 crc kubenswrapper[5099]: cluster: default-cluster Jan 21 18:15:22 crc kubenswrapper[5099]: namespace: default Jan 21 18:15:22 crc kubenswrapper[5099]: user: default-auth Jan 21 18:15:22 crc kubenswrapper[5099]: name: default-context Jan 21 18:15:22 crc kubenswrapper[5099]: current-context: default-context Jan 21 18:15:22 crc kubenswrapper[5099]: kind: Config Jan 21 18:15:22 crc kubenswrapper[5099]: preferences: {} Jan 21 18:15:22 crc kubenswrapper[5099]: users: Jan 21 18:15:22 crc kubenswrapper[5099]: - name: default-auth Jan 21 18:15:22 crc kubenswrapper[5099]: user: Jan 21 18:15:22 crc kubenswrapper[5099]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 21 18:15:22 crc kubenswrapper[5099]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 21 18:15:22 crc kubenswrapper[5099]: EOF Jan 21 18:15:22 crc kubenswrapper[5099]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ss74r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-svjkb_openshift-ovn-kubernetes(d7521550-bc40-43eb-bcb0-f563416d810b): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 21 18:15:22 crc kubenswrapper[5099]: > 
logger="UnhandledError" Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.093911 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" podUID="d7521550-bc40-43eb-bcb0-f563416d810b" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.094000 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-s88dj" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.102135 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-6pvpm" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.107113 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.107169 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.107184 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.107204 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.107217 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:22Z","lastTransitionTime":"2026-01-21T18:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.109660 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-bb5lc" Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.113521 5099 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 21 18:15:22 crc kubenswrapper[5099]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Jan 21 18:15:22 crc kubenswrapper[5099]: while [ true ]; Jan 21 18:15:22 crc kubenswrapper[5099]: do Jan 21 18:15:22 crc kubenswrapper[5099]: for f in $(ls /tmp/serviceca); do Jan 21 18:15:22 crc kubenswrapper[5099]: echo $f Jan 21 18:15:22 crc kubenswrapper[5099]: ca_file_path="/tmp/serviceca/${f}" Jan 21 18:15:22 crc kubenswrapper[5099]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Jan 21 18:15:22 crc kubenswrapper[5099]: reg_dir_path="/etc/docker/certs.d/${f}" Jan 21 18:15:22 crc kubenswrapper[5099]: if [ -e "${reg_dir_path}" ]; then Jan 21 18:15:22 crc kubenswrapper[5099]: cp -u $ca_file_path $reg_dir_path/ca.crt Jan 21 18:15:22 crc kubenswrapper[5099]: else Jan 21 18:15:22 crc kubenswrapper[5099]: mkdir $reg_dir_path Jan 21 18:15:22 crc kubenswrapper[5099]: cp $ca_file_path $reg_dir_path/ca.crt Jan 21 18:15:22 crc kubenswrapper[5099]: fi Jan 21 18:15:22 crc kubenswrapper[5099]: done Jan 21 18:15:22 crc kubenswrapper[5099]: for d in $(ls /etc/docker/certs.d); do Jan 21 18:15:22 crc kubenswrapper[5099]: echo $d Jan 21 18:15:22 crc kubenswrapper[5099]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Jan 21 18:15:22 crc kubenswrapper[5099]: reg_conf_path="/tmp/serviceca/${dp}" Jan 21 18:15:22 crc kubenswrapper[5099]: if [ ! -e "${reg_conf_path}" ]; then Jan 21 18:15:22 crc kubenswrapper[5099]: rm -rf /etc/docker/certs.d/$d Jan 21 18:15:22 crc kubenswrapper[5099]: fi Jan 21 18:15:22 crc kubenswrapper[5099]: done Jan 21 18:15:22 crc kubenswrapper[5099]: sleep 60 & wait ${!} Jan 21 18:15:22 crc kubenswrapper[5099]: done Jan 21 18:15:22 crc kubenswrapper[5099]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k6xzx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-2q8ng_openshift-image-registry(57ef6e89-3637-4516-a464-973f45d9ed03): CreateContainerConfigError: services have not yet been read at 
least once, cannot construct envvars Jan 21 18:15:22 crc kubenswrapper[5099]: > logger="UnhandledError" Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.114672 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-2q8ng" podUID="57ef6e89-3637-4516-a464-973f45d9ed03" Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.121039 5099 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 21 18:15:22 crc kubenswrapper[5099]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Jan 21 18:15:22 crc kubenswrapper[5099]: set -uo pipefail Jan 21 18:15:22 crc kubenswrapper[5099]: Jan 21 18:15:22 crc kubenswrapper[5099]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Jan 21 18:15:22 crc kubenswrapper[5099]: Jan 21 18:15:22 crc kubenswrapper[5099]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Jan 21 18:15:22 crc kubenswrapper[5099]: HOSTS_FILE="/etc/hosts" Jan 21 18:15:22 crc kubenswrapper[5099]: TEMP_FILE="/tmp/hosts.tmp" Jan 21 18:15:22 crc kubenswrapper[5099]: Jan 21 18:15:22 crc kubenswrapper[5099]: IFS=', ' read -r -a services <<< "${SERVICES}" Jan 21 18:15:22 crc kubenswrapper[5099]: Jan 21 18:15:22 crc kubenswrapper[5099]: # Make a temporary file with the old hosts file's attributes. Jan 21 18:15:22 crc kubenswrapper[5099]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Jan 21 18:15:22 crc kubenswrapper[5099]: echo "Failed to preserve hosts file. Exiting." Jan 21 18:15:22 crc kubenswrapper[5099]: exit 1 Jan 21 18:15:22 crc kubenswrapper[5099]: fi Jan 21 18:15:22 crc kubenswrapper[5099]: Jan 21 18:15:22 crc kubenswrapper[5099]: while true; do Jan 21 18:15:22 crc kubenswrapper[5099]: declare -A svc_ips Jan 21 18:15:22 crc kubenswrapper[5099]: for svc in "${services[@]}"; do Jan 21 18:15:22 crc kubenswrapper[5099]: # Fetch service IP from cluster dns if present. We make several tries Jan 21 18:15:22 crc kubenswrapper[5099]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Jan 21 18:15:22 crc kubenswrapper[5099]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Jan 21 18:15:22 crc kubenswrapper[5099]: # support UDP loadbalancers and require reaching DNS through TCP. Jan 21 18:15:22 crc kubenswrapper[5099]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 21 18:15:22 crc kubenswrapper[5099]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 21 18:15:22 crc kubenswrapper[5099]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 21 18:15:22 crc kubenswrapper[5099]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Jan 21 18:15:22 crc kubenswrapper[5099]: for i in ${!cmds[*]} Jan 21 18:15:22 crc kubenswrapper[5099]: do Jan 21 18:15:22 crc kubenswrapper[5099]: ips=($(eval "${cmds[i]}")) Jan 21 18:15:22 crc kubenswrapper[5099]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Jan 21 18:15:22 crc kubenswrapper[5099]: svc_ips["${svc}"]="${ips[@]}" Jan 21 18:15:22 crc kubenswrapper[5099]: break Jan 21 18:15:22 crc kubenswrapper[5099]: fi Jan 21 18:15:22 crc kubenswrapper[5099]: done Jan 21 18:15:22 crc kubenswrapper[5099]: done Jan 21 18:15:22 crc kubenswrapper[5099]: Jan 21 18:15:22 crc kubenswrapper[5099]: # Update /etc/hosts only if we get valid service IPs Jan 21 18:15:22 crc kubenswrapper[5099]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Jan 21 18:15:22 crc kubenswrapper[5099]: # Stale entries could exist in /etc/hosts if the service is deleted Jan 21 18:15:22 crc kubenswrapper[5099]: if [[ -n "${svc_ips[*]-}" ]]; then Jan 21 18:15:22 crc kubenswrapper[5099]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Jan 21 18:15:22 crc kubenswrapper[5099]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Jan 21 18:15:22 crc kubenswrapper[5099]: # Only continue rebuilding the hosts entries if its original content is preserved Jan 21 18:15:22 crc kubenswrapper[5099]: sleep 60 & wait Jan 21 18:15:22 crc kubenswrapper[5099]: continue Jan 21 18:15:22 crc kubenswrapper[5099]: fi Jan 21 18:15:22 crc kubenswrapper[5099]: Jan 21 18:15:22 crc kubenswrapper[5099]: # Append resolver entries for services Jan 21 18:15:22 crc kubenswrapper[5099]: rc=0 Jan 21 18:15:22 crc kubenswrapper[5099]: for svc in "${!svc_ips[@]}"; do Jan 21 18:15:22 crc kubenswrapper[5099]: for ip in ${svc_ips[${svc}]}; do Jan 21 18:15:22 crc kubenswrapper[5099]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Jan 21 18:15:22 crc kubenswrapper[5099]: done Jan 21 18:15:22 crc kubenswrapper[5099]: done Jan 21 18:15:22 crc kubenswrapper[5099]: if [[ $rc -ne 0 ]]; then Jan 21 18:15:22 crc kubenswrapper[5099]: sleep 60 & wait Jan 21 18:15:22 crc kubenswrapper[5099]: continue Jan 21 18:15:22 crc kubenswrapper[5099]: fi Jan 21 18:15:22 crc kubenswrapper[5099]: Jan 21 18:15:22 crc kubenswrapper[5099]: Jan 21 18:15:22 crc kubenswrapper[5099]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Jan 21 18:15:22 crc kubenswrapper[5099]: # Replace /etc/hosts with our modified version if needed Jan 21 18:15:22 crc kubenswrapper[5099]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Jan 21 18:15:22 crc kubenswrapper[5099]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Jan 21 18:15:22 crc kubenswrapper[5099]: fi Jan 21 18:15:22 crc kubenswrapper[5099]: sleep 60 & wait Jan 21 18:15:22 crc kubenswrapper[5099]: unset svc_ips Jan 21 18:15:22 crc kubenswrapper[5099]: done Jan 21 18:15:22 crc kubenswrapper[5099]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7x9c2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-s88dj_openshift-dns(20fc4331-f128-4a9a-b77f-85af1cf094cf): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 21 18:15:22 crc kubenswrapper[5099]: > logger="UnhandledError" Jan 21 18:15:22 crc kubenswrapper[5099]: W0121 18:15:22.121643 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9b34413_4767_4d59_b13b_8f882453977a.slice/crio-976375463f976d3df1b5d234c691b3c82188075eee429ef8922acffde1c909e1 WatchSource:0}: Error finding container 976375463f976d3df1b5d234c691b3c82188075eee429ef8922acffde1c909e1: Status 404 returned error can't find the container with id 976375463f976d3df1b5d234c691b3c82188075eee429ef8922acffde1c909e1 Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.122214 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-s88dj" podUID="20fc4331-f128-4a9a-b77f-85af1cf094cf" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.123413 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-nxrc9" Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.124352 5099 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 21 18:15:22 crc kubenswrapper[5099]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Jan 21 18:15:22 crc kubenswrapper[5099]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Jan 21 18:15:22 crc kubenswrapper[5099]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagat
ion:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8mtl6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-6pvpm_openshift-multus(d9b34413-4767-4d59-b13b-8f882453977a): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 21 18:15:22 crc kubenswrapper[5099]: > logger="UnhandledError" Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.125586 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-6pvpm" podUID="d9b34413-4767-4d59-b13b-8f882453977a" Jan 21 18:15:22 crc kubenswrapper[5099]: W0121 18:15:22.126594 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfedcb6dd_93e2_4530_b748_52a296d7809d.slice/crio-4602b356b36301bd887c14a676a88650f483da2f646b82c1d3375acf5d9f664a WatchSource:0}: Error finding container 4602b356b36301bd887c14a676a88650f483da2f646b82c1d3375acf5d9f664a: Status 404 returned error can't find the container with id 4602b356b36301bd887c14a676a88650f483da2f646b82c1d3375acf5d9f664a Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.128976 5099 kuberuntime_manager.go:1358] "Unhandled Error" err="init container 
&Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-75f6x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-bb5lc_openshift-multus(fedcb6dd-93e2-4530-b748-52a296d7809d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.131020 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-bb5lc" podUID="fedcb6dd-93e2-4530-b748-52a296d7809d" Jan 21 18:15:22 crc kubenswrapper[5099]: W0121 18:15:22.135076 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd3b8a6d_69a8_4079_a747_f379b71bcafe.slice/crio-c8d1a7ce264822b7a9ad6dbef2f4955a6a24275a032d785aa0c77b41a055c3b9 WatchSource:0}: Error finding container c8d1a7ce264822b7a9ad6dbef2f4955a6a24275a032d785aa0c77b41a055c3b9: Status 404 returned error can't find the container with id c8d1a7ce264822b7a9ad6dbef2f4955a6a24275a032d785aa0c77b41a055c3b9 Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.138156 5099 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 21 18:15:22 crc kubenswrapper[5099]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Jan 21 18:15:22 crc kubenswrapper[5099]: set -euo pipefail Jan 21 18:15:22 crc kubenswrapper[5099]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Jan 21 18:15:22 crc kubenswrapper[5099]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Jan 21 18:15:22 crc kubenswrapper[5099]: # As the secret mount is optional we must wait for the files to be present. 
Jan 21 18:15:22 crc kubenswrapper[5099]: # The service is created in monitor.yaml and this is created in sdn.yaml. Jan 21 18:15:22 crc kubenswrapper[5099]: TS=$(date +%s) Jan 21 18:15:22 crc kubenswrapper[5099]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Jan 21 18:15:22 crc kubenswrapper[5099]: HAS_LOGGED_INFO=0 Jan 21 18:15:22 crc kubenswrapper[5099]: Jan 21 18:15:22 crc kubenswrapper[5099]: log_missing_certs(){ Jan 21 18:15:22 crc kubenswrapper[5099]: CUR_TS=$(date +%s) Jan 21 18:15:22 crc kubenswrapper[5099]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Jan 21 18:15:22 crc kubenswrapper[5099]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Jan 21 18:15:22 crc kubenswrapper[5099]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Jan 21 18:15:22 crc kubenswrapper[5099]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Jan 21 18:15:22 crc kubenswrapper[5099]: HAS_LOGGED_INFO=1 Jan 21 18:15:22 crc kubenswrapper[5099]: fi Jan 21 18:15:22 crc kubenswrapper[5099]: } Jan 21 18:15:22 crc kubenswrapper[5099]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do Jan 21 18:15:22 crc kubenswrapper[5099]: log_missing_certs Jan 21 18:15:22 crc kubenswrapper[5099]: sleep 5 Jan 21 18:15:22 crc kubenswrapper[5099]: done Jan 21 18:15:22 crc kubenswrapper[5099]: Jan 21 18:15:22 crc kubenswrapper[5099]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Jan 21 18:15:22 crc kubenswrapper[5099]: exec /usr/bin/kube-rbac-proxy \ Jan 21 18:15:22 crc kubenswrapper[5099]: --logtostderr \ Jan 21 18:15:22 crc kubenswrapper[5099]: --secure-listen-address=:9108 \ Jan 21 18:15:22 crc kubenswrapper[5099]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Jan 21 18:15:22 crc kubenswrapper[5099]: --upstream=http://127.0.0.1:29108/ \ Jan 21 18:15:22 crc kubenswrapper[5099]: --tls-private-key-file=${TLS_PK} \ Jan 21 18:15:22 crc kubenswrapper[5099]: --tls-cert-file=${TLS_CERT} Jan 21 18:15:22 crc kubenswrapper[5099]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s7xvb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-nxrc9_openshift-ovn-kubernetes(dd3b8a6d-69a8-4079-a747-f379b71bcafe): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 21 18:15:22 crc kubenswrapper[5099]: > logger="UnhandledError" Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.142379 5099 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 21 
18:15:22 crc kubenswrapper[5099]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 21 18:15:22 crc kubenswrapper[5099]: if [[ -f "/env/_master" ]]; then Jan 21 18:15:22 crc kubenswrapper[5099]: set -o allexport Jan 21 18:15:22 crc kubenswrapper[5099]: source "/env/_master" Jan 21 18:15:22 crc kubenswrapper[5099]: set +o allexport Jan 21 18:15:22 crc kubenswrapper[5099]: fi Jan 21 18:15:22 crc kubenswrapper[5099]: Jan 21 18:15:22 crc kubenswrapper[5099]: ovn_v4_join_subnet_opt= Jan 21 18:15:22 crc kubenswrapper[5099]: if [[ "" != "" ]]; then Jan 21 18:15:22 crc kubenswrapper[5099]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Jan 21 18:15:22 crc kubenswrapper[5099]: fi Jan 21 18:15:22 crc kubenswrapper[5099]: ovn_v6_join_subnet_opt= Jan 21 18:15:22 crc kubenswrapper[5099]: if [[ "" != "" ]]; then Jan 21 18:15:22 crc kubenswrapper[5099]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Jan 21 18:15:22 crc kubenswrapper[5099]: fi Jan 21 18:15:22 crc kubenswrapper[5099]: Jan 21 18:15:22 crc kubenswrapper[5099]: ovn_v4_transit_switch_subnet_opt= Jan 21 18:15:22 crc kubenswrapper[5099]: if [[ "" != "" ]]; then Jan 21 18:15:22 crc kubenswrapper[5099]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Jan 21 18:15:22 crc kubenswrapper[5099]: fi Jan 21 18:15:22 crc kubenswrapper[5099]: ovn_v6_transit_switch_subnet_opt= Jan 21 18:15:22 crc kubenswrapper[5099]: if [[ "" != "" ]]; then Jan 21 18:15:22 crc kubenswrapper[5099]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Jan 21 18:15:22 crc kubenswrapper[5099]: fi Jan 21 18:15:22 crc kubenswrapper[5099]: Jan 21 18:15:22 crc kubenswrapper[5099]: dns_name_resolver_enabled_flag= Jan 21 18:15:22 crc kubenswrapper[5099]: if [[ "false" == "true" ]]; then Jan 21 18:15:22 crc kubenswrapper[5099]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Jan 21 18:15:22 crc kubenswrapper[5099]: fi Jan 21 18:15:22 crc kubenswrapper[5099]: Jan 21 18:15:22 crc kubenswrapper[5099]: persistent_ips_enabled_flag="--enable-persistent-ips" Jan 21 18:15:22 crc kubenswrapper[5099]: Jan 21 18:15:22 crc kubenswrapper[5099]: # This is needed so that converting clusters from GA to TP Jan 21 18:15:22 crc kubenswrapper[5099]: # will rollout control plane pods as well Jan 21 18:15:22 crc kubenswrapper[5099]: network_segmentation_enabled_flag= Jan 21 18:15:22 crc kubenswrapper[5099]: multi_network_enabled_flag= Jan 21 18:15:22 crc kubenswrapper[5099]: if [[ "true" == "true" ]]; then Jan 21 18:15:22 crc kubenswrapper[5099]: multi_network_enabled_flag="--enable-multi-network" Jan 21 18:15:22 crc kubenswrapper[5099]: fi Jan 21 18:15:22 crc kubenswrapper[5099]: if [[ "true" == "true" ]]; then Jan 21 18:15:22 crc kubenswrapper[5099]: if [[ "true" != "true" ]]; then Jan 21 18:15:22 crc kubenswrapper[5099]: multi_network_enabled_flag="--enable-multi-network" Jan 21 18:15:22 crc kubenswrapper[5099]: fi Jan 21 18:15:22 crc kubenswrapper[5099]: network_segmentation_enabled_flag="--enable-network-segmentation" Jan 21 18:15:22 crc kubenswrapper[5099]: fi Jan 21 18:15:22 crc kubenswrapper[5099]: Jan 21 18:15:22 crc kubenswrapper[5099]: route_advertisements_enable_flag= Jan 21 18:15:22 crc kubenswrapper[5099]: if [[ "false" == "true" ]]; then Jan 21 18:15:22 crc kubenswrapper[5099]: route_advertisements_enable_flag="--enable-route-advertisements" Jan 21 18:15:22 
crc kubenswrapper[5099]: fi Jan 21 18:15:22 crc kubenswrapper[5099]: Jan 21 18:15:22 crc kubenswrapper[5099]: preconfigured_udn_addresses_enable_flag= Jan 21 18:15:22 crc kubenswrapper[5099]: if [[ "false" == "true" ]]; then Jan 21 18:15:22 crc kubenswrapper[5099]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Jan 21 18:15:22 crc kubenswrapper[5099]: fi Jan 21 18:15:22 crc kubenswrapper[5099]: Jan 21 18:15:22 crc kubenswrapper[5099]: # Enable multi-network policy if configured (control-plane always full mode) Jan 21 18:15:22 crc kubenswrapper[5099]: multi_network_policy_enabled_flag= Jan 21 18:15:22 crc kubenswrapper[5099]: if [[ "false" == "true" ]]; then Jan 21 18:15:22 crc kubenswrapper[5099]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Jan 21 18:15:22 crc kubenswrapper[5099]: fi Jan 21 18:15:22 crc kubenswrapper[5099]: Jan 21 18:15:22 crc kubenswrapper[5099]: # Enable admin network policy if configured (control-plane always full mode) Jan 21 18:15:22 crc kubenswrapper[5099]: admin_network_policy_enabled_flag= Jan 21 18:15:22 crc kubenswrapper[5099]: if [[ "true" == "true" ]]; then Jan 21 18:15:22 crc kubenswrapper[5099]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Jan 21 18:15:22 crc kubenswrapper[5099]: fi Jan 21 18:15:22 crc kubenswrapper[5099]: Jan 21 18:15:22 crc kubenswrapper[5099]: if [ "shared" == "shared" ]; then Jan 21 18:15:22 crc kubenswrapper[5099]: gateway_mode_flags="--gateway-mode shared" Jan 21 18:15:22 crc kubenswrapper[5099]: elif [ "shared" == "local" ]; then Jan 21 18:15:22 crc kubenswrapper[5099]: gateway_mode_flags="--gateway-mode local" Jan 21 18:15:22 crc kubenswrapper[5099]: else Jan 21 18:15:22 crc kubenswrapper[5099]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." 
Jan 21 18:15:22 crc kubenswrapper[5099]: exit 1 Jan 21 18:15:22 crc kubenswrapper[5099]: fi Jan 21 18:15:22 crc kubenswrapper[5099]: Jan 21 18:15:22 crc kubenswrapper[5099]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Jan 21 18:15:22 crc kubenswrapper[5099]: exec /usr/bin/ovnkube \ Jan 21 18:15:22 crc kubenswrapper[5099]: --enable-interconnect \ Jan 21 18:15:22 crc kubenswrapper[5099]: --init-cluster-manager "${K8S_NODE}" \ Jan 21 18:15:22 crc kubenswrapper[5099]: --config-file=/run/ovnkube-config/ovnkube.conf \ Jan 21 18:15:22 crc kubenswrapper[5099]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Jan 21 18:15:22 crc kubenswrapper[5099]: --metrics-bind-address "127.0.0.1:29108" \ Jan 21 18:15:22 crc kubenswrapper[5099]: --metrics-enable-pprof \ Jan 21 18:15:22 crc kubenswrapper[5099]: --metrics-enable-config-duration \ Jan 21 18:15:22 crc kubenswrapper[5099]: ${ovn_v4_join_subnet_opt} \ Jan 21 18:15:22 crc kubenswrapper[5099]: ${ovn_v6_join_subnet_opt} \ Jan 21 18:15:22 crc kubenswrapper[5099]: ${ovn_v4_transit_switch_subnet_opt} \ Jan 21 18:15:22 crc kubenswrapper[5099]: ${ovn_v6_transit_switch_subnet_opt} \ Jan 21 18:15:22 crc kubenswrapper[5099]: ${dns_name_resolver_enabled_flag} \ Jan 21 18:15:22 crc kubenswrapper[5099]: ${persistent_ips_enabled_flag} \ Jan 21 18:15:22 crc kubenswrapper[5099]: ${multi_network_enabled_flag} \ Jan 21 18:15:22 crc kubenswrapper[5099]: ${network_segmentation_enabled_flag} \ Jan 21 18:15:22 crc kubenswrapper[5099]: ${gateway_mode_flags} \ Jan 21 18:15:22 crc kubenswrapper[5099]: ${route_advertisements_enable_flag} \ Jan 21 18:15:22 crc kubenswrapper[5099]: ${preconfigured_udn_addresses_enable_flag} \ Jan 21 18:15:22 crc kubenswrapper[5099]: --enable-egress-ip=true \ Jan 21 18:15:22 crc kubenswrapper[5099]: --enable-egress-firewall=true \ Jan 21 18:15:22 crc kubenswrapper[5099]: --enable-egress-qos=true \ Jan 21 18:15:22 crc kubenswrapper[5099]: --enable-egress-service=true \ Jan 21 18:15:22 crc kubenswrapper[5099]: --enable-multicast \ Jan 21 18:15:22 crc kubenswrapper[5099]: --enable-multi-external-gateway=true \ Jan 21 18:15:22 crc kubenswrapper[5099]: ${multi_network_policy_enabled_flag} \ Jan 21 18:15:22 crc kubenswrapper[5099]: ${admin_network_policy_enabled_flag} Jan 21 18:15:22 crc kubenswrapper[5099]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s7xvb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-nxrc9_openshift-ovn-kubernetes(dd3b8a6d-69a8-4079-a747-f379b71bcafe): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 21 18:15:22 crc kubenswrapper[5099]: > logger="UnhandledError" Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.143756 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-nxrc9" podUID="dd3b8a6d-69a8-4079-a747-f379b71bcafe" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.209704 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.209810 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.209823 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.209849 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.209866 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:22Z","lastTransitionTime":"2026-01-21T18:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.264297 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.264391 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.264457 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.264512 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.264641 5099 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.264819 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-21 18:15:23.264789774 +0000 UTC m=+80.678752235 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.264816 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.264862 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.264873 5099 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.264974 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.264884 5099 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.265000 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.265182 5099 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.265009 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-21 18:15:23.264981019 +0000 UTC m=+80.678943510 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.265270 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-21 18:15:23.265255656 +0000 UTC m=+80.679218117 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.265321 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-21 18:15:23.265312327 +0000 UTC m=+80.679274788 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.268298 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.268342 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.268354 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.268371 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.268382 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:22Z","lastTransitionTime":"2026-01-21T18:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.286141 5099 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"78a4762e-39b8-4942-bf29-6a84c0f689b6\\\",\\\"systemUUID\\\":\\\"75e1b546-23f6-45fa-956a-1002c3d2f9b5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.291356 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.291398 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.291410 5099 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.291427 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.291440 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:22Z","lastTransitionTime":"2026-01-21T18:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.301876 5099 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"78a4762e-39b8-4942-bf29-6a84c0f689b6\\\",\\\"systemUUID\\\":\\\"75e1b546-23f6-45fa-956a-1002c3d2f9b5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.305552 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.305591 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.305600 5099 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.305614 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.305626 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:22Z","lastTransitionTime":"2026-01-21T18:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.319420 5099 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{…}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
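The repeated "connection refused" on 127.0.0.1:9743 is the proximate failure: every node-status PATCH is intercepted by the node.network-node-identity.openshift.io validating webhook, and nothing is listening on that port yet, so the API server rejects the patch no matter what the kubelet sends. A minimal check from the node, assuming the component runs in the openshift-network-node-identity namespace as on recent OpenShift releases (the namespace name is an assumption):

    # Is anything listening on the webhook port yet?
    ss -tlnp | grep 9743 || echo "webhook endpoint not up"
    # Are the webhook-serving pods scheduled and running? (namespace is an assumption)
    oc -n openshift-network-node-identity get pods -o wide

Until that endpoint answers, each retry below fails with the identical internal error.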
Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.324477 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.324519 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.324533 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.324556 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.324575 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:22Z","lastTransitionTime":"2026-01-21T18:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.334849 5099 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{…}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
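The underlying Ready=False condition is independent of the webhook problem: the kubelet reports NetworkPluginNotReady because no CNI configuration file exists in /etc/kubernetes/cni/net.d/ yet. On this cluster the network plugin is OVN-Kubernetes, and the node stays NotReady until ovnkube-node writes its config into that directory. A quick way to watch for the transition, assuming the usual OVN config file name (10-ovn-kubernetes.conf is an assumption; the exact name can differ by release):

    # Wait for the CNI config to appear on the node
    watch -n 5 'ls -l /etc/kubernetes/cni/net.d/'
    # Check the network plugin pods that are supposed to write it
    oc -n openshift-ovn-kubernetes get pods -o wide

Once a conf file shows up, the runtime reports NetworkReady=true and the NodeNotReady events stop.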
Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.338809 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.338850 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.338860 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.338876 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.338887 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:22Z","lastTransitionTime":"2026-01-21T18:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.355224 5099 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{…}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.355429 5099 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count"
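The "update node status exceeds retry count" line marks the end of one sync cycle, not a terminal failure: within a single update cycle the kubelet retries the PATCH a fixed number of times (nodeStatusUpdateRetry, 5 in the upstream kubelet source) and then gives up until the next node-status update interval (10s by default), which is why the same burst of "will retry" errors recurs. Counting the bursts in the journal makes the pattern visible; a sketch:

    # Each group of "will retry" lines followed by one "exceeds retry count"
    # is a single failed node-status sync cycle.
    journalctl -u kubelet --since "18:15" | grep -c 'Error updating node status'
    journalctl -u kubelet --since "18:15" | grep -c 'exceeds retry count'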
Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.357085 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.357158 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.357173 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.357211 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.357225 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:22Z","lastTransitionTime":"2026-01-21T18:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.365635 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.365867 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:15:23.365818219 +0000 UTC m=+80.779780670 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.460065 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.461488 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.461508 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.461532 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.461550 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:22Z","lastTransitionTime":"2026-01-21T18:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.467088 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d26f0ad-829f-4f64-82b5-1292bd2316f0-metrics-certs\") pod \"network-metrics-daemon-tsdhb\" (UID: \"0d26f0ad-829f-4f64-82b5-1292bd2316f0\") " pod="openshift-multus/network-metrics-daemon-tsdhb" Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.467271 5099 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.467348 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d26f0ad-829f-4f64-82b5-1292bd2316f0-metrics-certs podName:0d26f0ad-829f-4f64-82b5-1292bd2316f0 nodeName:}" failed. No retries permitted until 2026-01-21 18:15:23.467328547 +0000 UTC m=+80.881291008 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0d26f0ad-829f-4f64-82b5-1292bd2316f0-metrics-certs") pod "network-metrics-daemon-tsdhb" (UID: "0d26f0ad-829f-4f64-82b5-1292bd2316f0") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.476593 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-s88dj" event={"ID":"20fc4331-f128-4a9a-b77f-85af1cf094cf","Type":"ContainerStarted","Data":"8a36ab13aea808f1dbd5b7e87b83fa1381da1d148598f8725e14d941af345e59"} Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.478618 5099 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 21 18:15:22 crc kubenswrapper[5099]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Jan 21 18:15:22 crc kubenswrapper[5099]: set -uo pipefail Jan 21 18:15:22 crc kubenswrapper[5099]: Jan 21 18:15:22 crc kubenswrapper[5099]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Jan 21 18:15:22 crc kubenswrapper[5099]: Jan 21 18:15:22 crc kubenswrapper[5099]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Jan 21 18:15:22 crc kubenswrapper[5099]: HOSTS_FILE="/etc/hosts" Jan 21 18:15:22 crc kubenswrapper[5099]: TEMP_FILE="/tmp/hosts.tmp" Jan 21 18:15:22 crc kubenswrapper[5099]: Jan 21 18:15:22 crc kubenswrapper[5099]: IFS=', ' read -r -a services <<< "${SERVICES}" Jan 21 18:15:22 crc kubenswrapper[5099]: Jan 21 18:15:22 crc kubenswrapper[5099]: # Make a temporary file with the old hosts file's attributes. Jan 21 18:15:22 crc kubenswrapper[5099]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Jan 21 18:15:22 crc kubenswrapper[5099]: echo "Failed to preserve hosts file. Exiting." Jan 21 18:15:22 crc kubenswrapper[5099]: exit 1 Jan 21 18:15:22 crc kubenswrapper[5099]: fi Jan 21 18:15:22 crc kubenswrapper[5099]: Jan 21 18:15:22 crc kubenswrapper[5099]: while true; do Jan 21 18:15:22 crc kubenswrapper[5099]: declare -A svc_ips Jan 21 18:15:22 crc kubenswrapper[5099]: for svc in "${services[@]}"; do Jan 21 18:15:22 crc kubenswrapper[5099]: # Fetch service IP from cluster dns if present. We make several tries Jan 21 18:15:22 crc kubenswrapper[5099]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. 
The two last ones Jan 21 18:15:22 crc kubenswrapper[5099]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Jan 21 18:15:22 crc kubenswrapper[5099]: # support UDP loadbalancers and require reaching DNS through TCP. Jan 21 18:15:22 crc kubenswrapper[5099]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 21 18:15:22 crc kubenswrapper[5099]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 21 18:15:22 crc kubenswrapper[5099]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 21 18:15:22 crc kubenswrapper[5099]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Jan 21 18:15:22 crc kubenswrapper[5099]: for i in ${!cmds[*]} Jan 21 18:15:22 crc kubenswrapper[5099]: do Jan 21 18:15:22 crc kubenswrapper[5099]: ips=($(eval "${cmds[i]}")) Jan 21 18:15:22 crc kubenswrapper[5099]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Jan 21 18:15:22 crc kubenswrapper[5099]: svc_ips["${svc}"]="${ips[@]}" Jan 21 18:15:22 crc kubenswrapper[5099]: break Jan 21 18:15:22 crc kubenswrapper[5099]: fi Jan 21 18:15:22 crc kubenswrapper[5099]: done Jan 21 18:15:22 crc kubenswrapper[5099]: done Jan 21 18:15:22 crc kubenswrapper[5099]: Jan 21 18:15:22 crc kubenswrapper[5099]: # Update /etc/hosts only if we get valid service IPs Jan 21 18:15:22 crc kubenswrapper[5099]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Jan 21 18:15:22 crc kubenswrapper[5099]: # Stale entries could exist in /etc/hosts if the service is deleted Jan 21 18:15:22 crc kubenswrapper[5099]: if [[ -n "${svc_ips[*]-}" ]]; then Jan 21 18:15:22 crc kubenswrapper[5099]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Jan 21 18:15:22 crc kubenswrapper[5099]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Jan 21 18:15:22 crc kubenswrapper[5099]: # Only continue rebuilding the hosts entries if its original content is preserved Jan 21 18:15:22 crc kubenswrapper[5099]: sleep 60 & wait Jan 21 18:15:22 crc kubenswrapper[5099]: continue Jan 21 18:15:22 crc kubenswrapper[5099]: fi Jan 21 18:15:22 crc kubenswrapper[5099]: Jan 21 18:15:22 crc kubenswrapper[5099]: # Append resolver entries for services Jan 21 18:15:22 crc kubenswrapper[5099]: rc=0 Jan 21 18:15:22 crc kubenswrapper[5099]: for svc in "${!svc_ips[@]}"; do Jan 21 18:15:22 crc kubenswrapper[5099]: for ip in ${svc_ips[${svc}]}; do Jan 21 18:15:22 crc kubenswrapper[5099]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
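
Note: the container command above (which continues through the next lines) is the DNS operator's node-resolver loop: resolve each service with dig (A and AAAA, UDP first, then TCP for old Kuryr/OSP13 setups), strip previously generated lines by their marker comment, append fresh entries, and overwrite /etc/hosts only when the content actually changed. A condensed, runnable restatement of that core pattern, simplified to a single service and a single query type; this is a sketch of the same idea, not the shipped script (values are copied from the pod's env further below):

    #!/bin/bash
    # Condensed node-resolver pattern: refresh one marker-tagged /etc/hosts entry.
    SVC=image-registry.openshift-image-registry.svc
    NAMESERVER=10.217.4.10
    CLUSTER_DOMAIN=cluster.local
    MARKER=openshift-generated-node-resolver
    HOSTS=/etc/hosts
    TMP=/tmp/hosts.tmp
    ip=$(dig -t A @"${NAMESERVER}" +short "${SVC}.${CLUSTER_DOMAIN}" | grep -v '^;' | head -n1)
    if [[ -n "${ip}" ]]; then
      sed "/# ${MARKER}/d" "${HOSTS}" > "${TMP}"               # drop our old entries
      echo "${ip} ${SVC} ${SVC}.${CLUSTER_DOMAIN} # ${MARKER}" >> "${TMP}"
      cmp -s "${TMP}" "${HOSTS}" || cp -f "${TMP}" "${HOSTS}"  # replace only on change
    fi
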
Jan 21 18:15:22 crc kubenswrapper[5099]: done Jan 21 18:15:22 crc kubenswrapper[5099]: done Jan 21 18:15:22 crc kubenswrapper[5099]: if [[ $rc -ne 0 ]]; then Jan 21 18:15:22 crc kubenswrapper[5099]: sleep 60 & wait Jan 21 18:15:22 crc kubenswrapper[5099]: continue Jan 21 18:15:22 crc kubenswrapper[5099]: fi Jan 21 18:15:22 crc kubenswrapper[5099]: Jan 21 18:15:22 crc kubenswrapper[5099]: Jan 21 18:15:22 crc kubenswrapper[5099]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Jan 21 18:15:22 crc kubenswrapper[5099]: # Replace /etc/hosts with our modified version if needed Jan 21 18:15:22 crc kubenswrapper[5099]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Jan 21 18:15:22 crc kubenswrapper[5099]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Jan 21 18:15:22 crc kubenswrapper[5099]: fi Jan 21 18:15:22 crc kubenswrapper[5099]: sleep 60 & wait Jan 21 18:15:22 crc kubenswrapper[5099]: unset svc_ips Jan 21 18:15:22 crc kubenswrapper[5099]: done Jan 21 18:15:22 crc kubenswrapper[5099]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7x9c2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-s88dj_openshift-dns(20fc4331-f128-4a9a-b77f-85af1cf094cf): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 21 18:15:22 crc kubenswrapper[5099]: > logger="UnhandledError" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.479550 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-2q8ng" event={"ID":"57ef6e89-3637-4516-a464-973f45d9ed03","Type":"ContainerStarted","Data":"eb936e4c4cb058b4abd557e26d112a6dc466130f493f360ed7e69ea5ade38f7b"} Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.480166 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-s88dj" podUID="20fc4331-f128-4a9a-b77f-85af1cf094cf" Jan 21 18:15:22 
crc kubenswrapper[5099]: I0121 18:15:22.481120 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" event={"ID":"d7521550-bc40-43eb-bcb0-f563416d810b","Type":"ContainerStarted","Data":"c326c766374cb3d8f2394017baca2bb66b98a85298a8696e57a7f70208606df7"} Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.482316 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" event={"ID":"b19b831f-eaf0-4c77-859b-84eb9a5f233c","Type":"ContainerStarted","Data":"77808ea4f92aab1c042ef924a98d4d797887f6e016d47919f2d50e3c5a1f8945"} Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.483582 5099 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 21 18:15:22 crc kubenswrapper[5099]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Jan 21 18:15:22 crc kubenswrapper[5099]: while [ true ]; Jan 21 18:15:22 crc kubenswrapper[5099]: do Jan 21 18:15:22 crc kubenswrapper[5099]: for f in $(ls /tmp/serviceca); do Jan 21 18:15:22 crc kubenswrapper[5099]: echo $f Jan 21 18:15:22 crc kubenswrapper[5099]: ca_file_path="/tmp/serviceca/${f}" Jan 21 18:15:22 crc kubenswrapper[5099]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Jan 21 18:15:22 crc kubenswrapper[5099]: reg_dir_path="/etc/docker/certs.d/${f}" Jan 21 18:15:22 crc kubenswrapper[5099]: if [ -e "${reg_dir_path}" ]; then Jan 21 18:15:22 crc kubenswrapper[5099]: cp -u $ca_file_path $reg_dir_path/ca.crt Jan 21 18:15:22 crc kubenswrapper[5099]: else Jan 21 18:15:22 crc kubenswrapper[5099]: mkdir $reg_dir_path Jan 21 18:15:22 crc kubenswrapper[5099]: cp $ca_file_path $reg_dir_path/ca.crt Jan 21 18:15:22 crc kubenswrapper[5099]: fi Jan 21 18:15:22 crc kubenswrapper[5099]: done Jan 21 18:15:22 crc kubenswrapper[5099]: for d in $(ls /etc/docker/certs.d); do Jan 21 18:15:22 crc kubenswrapper[5099]: echo $d Jan 21 18:15:22 crc kubenswrapper[5099]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Jan 21 18:15:22 crc kubenswrapper[5099]: reg_conf_path="/tmp/serviceca/${dp}" Jan 21 18:15:22 crc kubenswrapper[5099]: if [ ! 
-e "${reg_conf_path}" ]; then Jan 21 18:15:22 crc kubenswrapper[5099]: rm -rf /etc/docker/certs.d/$d Jan 21 18:15:22 crc kubenswrapper[5099]: fi Jan 21 18:15:22 crc kubenswrapper[5099]: done Jan 21 18:15:22 crc kubenswrapper[5099]: sleep 60 & wait ${!} Jan 21 18:15:22 crc kubenswrapper[5099]: done Jan 21 18:15:22 crc kubenswrapper[5099]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k6xzx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-2q8ng_openshift-image-registry(57ef6e89-3637-4516-a464-973f45d9ed03): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 21 18:15:22 crc kubenswrapper[5099]: > logger="UnhandledError" Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.484172 5099 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gknfh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 
},Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.484289 5099 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 21 18:15:22 crc kubenswrapper[5099]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Jan 21 18:15:22 crc kubenswrapper[5099]: apiVersion: v1 Jan 21 18:15:22 crc kubenswrapper[5099]: clusters: Jan 21 18:15:22 crc kubenswrapper[5099]: - cluster: Jan 21 18:15:22 crc kubenswrapper[5099]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Jan 21 18:15:22 crc kubenswrapper[5099]: server: https://api-int.crc.testing:6443 Jan 21 18:15:22 crc kubenswrapper[5099]: name: default-cluster Jan 21 18:15:22 crc kubenswrapper[5099]: contexts: Jan 21 18:15:22 crc kubenswrapper[5099]: - context: Jan 21 18:15:22 crc kubenswrapper[5099]: cluster: default-cluster Jan 21 18:15:22 crc kubenswrapper[5099]: namespace: default Jan 21 18:15:22 crc kubenswrapper[5099]: user: default-auth Jan 21 18:15:22 crc kubenswrapper[5099]: name: default-context Jan 21 18:15:22 crc kubenswrapper[5099]: current-context: default-context Jan 21 18:15:22 crc kubenswrapper[5099]: kind: Config Jan 21 18:15:22 crc kubenswrapper[5099]: preferences: {} Jan 21 18:15:22 crc kubenswrapper[5099]: users: Jan 21 18:15:22 crc kubenswrapper[5099]: - name: default-auth Jan 21 18:15:22 crc kubenswrapper[5099]: user: Jan 21 18:15:22 crc kubenswrapper[5099]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 21 18:15:22 crc kubenswrapper[5099]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 21 18:15:22 crc kubenswrapper[5099]: EOF Jan 21 18:15:22 crc kubenswrapper[5099]: 
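
Note: the heredoc that just closed writes /etc/ovn/kubeconfig for ovnkube-node. It points at the internal apiserver endpoint (api-int.crc.testing:6443) and authenticates with the node's rotating client certificate; client-certificate and client-key deliberately reference the same combined PEM file. Two quick sanity checks against files with these names (paths as given in the log; illustrative only):

    # Does the kubeconfig authenticate? Prints the ovnkube client identity.
    oc --kubeconfig /etc/ovn/kubeconfig whoami
    # Inspect the rotating client cert the kubeconfig points at:
    openssl x509 -in /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem -noout -subject -enddate
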
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ss74r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-svjkb_openshift-ovn-kubernetes(d7521550-bc40-43eb-bcb0-f563416d810b): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 21 18:15:22 crc kubenswrapper[5099]: > logger="UnhandledError" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.484396 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"805e78a0ff7366f7def9af2f9c16dfb1cc9ac54dd1cc791d667f7d0dfa14488d"} Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.484663 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-2q8ng" podUID="57ef6e89-3637-4516-a464-973f45d9ed03" Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.485429 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" podUID="d7521550-bc40-43eb-bcb0-f563416d810b" Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.485576 5099 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 21 18:15:22 crc kubenswrapper[5099]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 21 18:15:22 crc kubenswrapper[5099]: if [[ -f "/env/_master" ]]; then Jan 21 18:15:22 crc kubenswrapper[5099]: set -o allexport Jan 21 18:15:22 crc kubenswrapper[5099]: source "/env/_master" Jan 21 18:15:22 crc kubenswrapper[5099]: set +o allexport Jan 21 18:15:22 crc kubenswrapper[5099]: fi Jan 21 18:15:22 crc kubenswrapper[5099]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
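
Note: the webhook command beginning above, like the approver and ovnkube-cluster-manager commands later in this log, opens with the same debugging hook: if an /env/_master file exists, source it with allexport so every assignment in it becomes an exported variable visible to the exec'd binary. The preamble in isolation (the override variable in the comment is illustrative, not from this log):

    # Shared env-override preamble from the ovn-kubernetes containers in this log:
    if [[ -f "/env/_master" ]]; then
      set -o allexport          # export every variable assigned while sourcing
      source "/env/_master"     # operator-provided overrides, e.g. OVN_KUBE_LOG_LEVEL=5
      set +o allexport
    fi
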
Jan 21 18:15:22 crc kubenswrapper[5099]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Jan 21 18:15:22 crc kubenswrapper[5099]: ho_enable="--enable-hybrid-overlay" Jan 21 18:15:22 crc kubenswrapper[5099]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Jan 21 18:15:22 crc kubenswrapper[5099]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Jan 21 18:15:22 crc kubenswrapper[5099]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Jan 21 18:15:22 crc kubenswrapper[5099]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 21 18:15:22 crc kubenswrapper[5099]: --webhook-cert-dir="/etc/webhook-cert" \ Jan 21 18:15:22 crc kubenswrapper[5099]: --webhook-host=127.0.0.1 \ Jan 21 18:15:22 crc kubenswrapper[5099]: --webhook-port=9743 \ Jan 21 18:15:22 crc kubenswrapper[5099]: ${ho_enable} \ Jan 21 18:15:22 crc kubenswrapper[5099]: --enable-interconnect \ Jan 21 18:15:22 crc kubenswrapper[5099]: --disable-approver \ Jan 21 18:15:22 crc kubenswrapper[5099]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Jan 21 18:15:22 crc kubenswrapper[5099]: --wait-for-kubernetes-api=200s \ Jan 21 18:15:22 crc kubenswrapper[5099]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Jan 21 18:15:22 crc kubenswrapper[5099]: --loglevel="${LOGLEVEL}" Jan 21 18:15:22 crc kubenswrapper[5099]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct 
envvars Jan 21 18:15:22 crc kubenswrapper[5099]: > logger="UnhandledError" Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.486284 5099 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gknfh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.486444 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-nxrc9" event={"ID":"dd3b8a6d-69a8-4079-a747-f379b71bcafe","Type":"ContainerStarted","Data":"c8d1a7ce264822b7a9ad6dbef2f4955a6a24275a032d785aa0c77b41a055c3b9"} Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.487491 5099 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 21 18:15:22 crc kubenswrapper[5099]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 21 18:15:22 crc kubenswrapper[5099]: if [[ -f "/env/_master" ]]; then Jan 21 18:15:22 crc kubenswrapper[5099]: set -o allexport Jan 21 18:15:22 crc kubenswrapper[5099]: source "/env/_master" Jan 21 18:15:22 crc kubenswrapper[5099]: set +o allexport Jan 21 18:15:22 crc kubenswrapper[5099]: fi Jan 21 18:15:22 crc kubenswrapper[5099]: Jan 21 18:15:22 crc kubenswrapper[5099]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Jan 21 
18:15:22 crc kubenswrapper[5099]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 21 18:15:22 crc kubenswrapper[5099]: --disable-webhook \ Jan 21 18:15:22 crc kubenswrapper[5099]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Jan 21 18:15:22 crc kubenswrapper[5099]: --loglevel="${LOGLEVEL}" Jan 21 18:15:22 crc kubenswrapper[5099]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 21 18:15:22 crc kubenswrapper[5099]: > logger="UnhandledError" Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.487542 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.488073 5099 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 21 18:15:22 crc kubenswrapper[5099]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Jan 21 18:15:22 crc kubenswrapper[5099]: set -euo pipefail Jan 21 18:15:22 crc kubenswrapper[5099]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Jan 21 18:15:22 crc kubenswrapper[5099]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Jan 21 18:15:22 crc kubenswrapper[5099]: # As the secret mount is optional we must wait for the files to be present. Jan 21 18:15:22 crc kubenswrapper[5099]: # The service is created in monitor.yaml and this is created in sdn.yaml. 
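
Note: the command continuing below waits for an optionally mounted metrics cert before exec'ing kube-rbac-proxy, logging INFO once and escalating to WARN after 20 minutes. One bash subtlety worth flagging: the shipped comparison [[ "${CUR_TS}" -gt "WARN_TS" ]] still works even though the right-hand side is missing its $, because [[ ... -gt ... ]] arithmetic-evaluates both operands, so the bare string WARN_TS resolves to that variable's value; writing ${WARN_TS} would say the same thing more plainly. A condensed form of the loop (paths from the container spec; a sketch, not the shipped script):

    # Wait-for-cert pattern, condensed:
    TLS_PK=/etc/pki/tls/metrics-cert/tls.key
    TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt
    WARN_TS=$(( $(date +%s) + 20 * 60 ))
    while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]]; do
      if [[ $(date +%s) -gt ${WARN_TS} ]]; then
        echo "$(date -Iseconds) WARN: metrics cert still not mounted"
      fi
      sleep 5
    done
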
Jan 21 18:15:22 crc kubenswrapper[5099]: TS=$(date +%s) Jan 21 18:15:22 crc kubenswrapper[5099]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Jan 21 18:15:22 crc kubenswrapper[5099]: HAS_LOGGED_INFO=0 Jan 21 18:15:22 crc kubenswrapper[5099]: Jan 21 18:15:22 crc kubenswrapper[5099]: log_missing_certs(){ Jan 21 18:15:22 crc kubenswrapper[5099]: CUR_TS=$(date +%s) Jan 21 18:15:22 crc kubenswrapper[5099]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Jan 21 18:15:22 crc kubenswrapper[5099]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Jan 21 18:15:22 crc kubenswrapper[5099]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Jan 21 18:15:22 crc kubenswrapper[5099]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Jan 21 18:15:22 crc kubenswrapper[5099]: HAS_LOGGED_INFO=1 Jan 21 18:15:22 crc kubenswrapper[5099]: fi Jan 21 18:15:22 crc kubenswrapper[5099]: } Jan 21 18:15:22 crc kubenswrapper[5099]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do Jan 21 18:15:22 crc kubenswrapper[5099]: log_missing_certs Jan 21 18:15:22 crc kubenswrapper[5099]: sleep 5 Jan 21 18:15:22 crc kubenswrapper[5099]: done Jan 21 18:15:22 crc kubenswrapper[5099]: Jan 21 18:15:22 crc kubenswrapper[5099]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Jan 21 18:15:22 crc kubenswrapper[5099]: exec /usr/bin/kube-rbac-proxy \ Jan 21 18:15:22 crc kubenswrapper[5099]: --logtostderr \ Jan 21 18:15:22 crc kubenswrapper[5099]: --secure-listen-address=:9108 \ Jan 21 18:15:22 crc kubenswrapper[5099]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Jan 21 18:15:22 crc kubenswrapper[5099]: --upstream=http://127.0.0.1:29108/ \ Jan 21 18:15:22 crc kubenswrapper[5099]: --tls-private-key-file=${TLS_PK} \ Jan 21 18:15:22 crc kubenswrapper[5099]: --tls-cert-file=${TLS_CERT} Jan 21 18:15:22 crc kubenswrapper[5099]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s7xvb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-nxrc9_openshift-ovn-kubernetes(dd3b8a6d-69a8-4079-a747-f379b71bcafe): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 21 18:15:22 crc kubenswrapper[5099]: > logger="UnhandledError" Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.488617 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at 
least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.488882 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"329d9736bc92d8f055f94c8f951ca62d096410f230a724ab16f8a36ca0aaf6ba"} Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.490096 5099 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.490987 5099 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 21 18:15:22 crc kubenswrapper[5099]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 21 18:15:22 crc kubenswrapper[5099]: if [[ -f "/env/_master" ]]; then Jan 21 18:15:22 crc kubenswrapper[5099]: set -o allexport Jan 21 18:15:22 crc kubenswrapper[5099]: source "/env/_master" Jan 21 18:15:22 crc kubenswrapper[5099]: set +o allexport Jan 21 18:15:22 crc kubenswrapper[5099]: fi Jan 21 18:15:22 crc kubenswrapper[5099]: 
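
Note: the ovnkube-cluster-manager command resuming below is full of tests between literal strings ([[ "" != "" ]], [[ "true" == "true" ]], [ "shared" == "shared" ]). These are not bugs but the residue of manifest templating: the cluster-network-operator renders this script from a template, substituting cluster configuration into what become constant comparisons, so each rendered copy simply enables or skips a flag. The shape of such a template line, with hypothetical placeholder syntax for illustration only:

    # Template (illustrative):    if [[ "{{.OVN_GATEWAY_MODE}}" == "shared" ]]; then
    # Rendered (as in this log):  if [ "shared" == "shared" ]; then
    #                               gateway_mode_flags="--gateway-mode shared"
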
Jan 21 18:15:22 crc kubenswrapper[5099]: ovn_v4_join_subnet_opt= Jan 21 18:15:22 crc kubenswrapper[5099]: if [[ "" != "" ]]; then Jan 21 18:15:22 crc kubenswrapper[5099]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Jan 21 18:15:22 crc kubenswrapper[5099]: fi Jan 21 18:15:22 crc kubenswrapper[5099]: ovn_v6_join_subnet_opt= Jan 21 18:15:22 crc kubenswrapper[5099]: if [[ "" != "" ]]; then Jan 21 18:15:22 crc kubenswrapper[5099]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Jan 21 18:15:22 crc kubenswrapper[5099]: fi Jan 21 18:15:22 crc kubenswrapper[5099]: Jan 21 18:15:22 crc kubenswrapper[5099]: ovn_v4_transit_switch_subnet_opt= Jan 21 18:15:22 crc kubenswrapper[5099]: if [[ "" != "" ]]; then Jan 21 18:15:22 crc kubenswrapper[5099]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Jan 21 18:15:22 crc kubenswrapper[5099]: fi Jan 21 18:15:22 crc kubenswrapper[5099]: ovn_v6_transit_switch_subnet_opt= Jan 21 18:15:22 crc kubenswrapper[5099]: if [[ "" != "" ]]; then Jan 21 18:15:22 crc kubenswrapper[5099]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Jan 21 18:15:22 crc kubenswrapper[5099]: fi Jan 21 18:15:22 crc kubenswrapper[5099]: Jan 21 18:15:22 crc kubenswrapper[5099]: dns_name_resolver_enabled_flag= Jan 21 18:15:22 crc kubenswrapper[5099]: if [[ "false" == "true" ]]; then Jan 21 18:15:22 crc kubenswrapper[5099]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Jan 21 18:15:22 crc kubenswrapper[5099]: fi Jan 21 18:15:22 crc kubenswrapper[5099]: Jan 21 18:15:22 crc kubenswrapper[5099]: persistent_ips_enabled_flag="--enable-persistent-ips" Jan 21 18:15:22 crc kubenswrapper[5099]: Jan 21 18:15:22 crc kubenswrapper[5099]: # This is needed so that converting clusters from GA to TP Jan 21 18:15:22 crc kubenswrapper[5099]: # will rollout control plane pods as well Jan 21 18:15:22 crc kubenswrapper[5099]: network_segmentation_enabled_flag= Jan 21 18:15:22 crc kubenswrapper[5099]: multi_network_enabled_flag= Jan 21 18:15:22 crc kubenswrapper[5099]: if [[ "true" == "true" ]]; then Jan 21 18:15:22 crc kubenswrapper[5099]: multi_network_enabled_flag="--enable-multi-network" Jan 21 18:15:22 crc kubenswrapper[5099]: fi Jan 21 18:15:22 crc kubenswrapper[5099]: if [[ "true" == "true" ]]; then Jan 21 18:15:22 crc kubenswrapper[5099]: if [[ "true" != "true" ]]; then Jan 21 18:15:22 crc kubenswrapper[5099]: multi_network_enabled_flag="--enable-multi-network" Jan 21 18:15:22 crc kubenswrapper[5099]: fi Jan 21 18:15:22 crc kubenswrapper[5099]: network_segmentation_enabled_flag="--enable-network-segmentation" Jan 21 18:15:22 crc kubenswrapper[5099]: fi Jan 21 18:15:22 crc kubenswrapper[5099]: Jan 21 18:15:22 crc kubenswrapper[5099]: route_advertisements_enable_flag= Jan 21 18:15:22 crc kubenswrapper[5099]: if [[ "false" == "true" ]]; then Jan 21 18:15:22 crc kubenswrapper[5099]: route_advertisements_enable_flag="--enable-route-advertisements" Jan 21 18:15:22 crc kubenswrapper[5099]: fi Jan 21 18:15:22 crc kubenswrapper[5099]: Jan 21 18:15:22 crc kubenswrapper[5099]: preconfigured_udn_addresses_enable_flag= Jan 21 18:15:22 crc kubenswrapper[5099]: if [[ "false" == "true" ]]; then Jan 21 18:15:22 crc kubenswrapper[5099]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Jan 21 18:15:22 crc kubenswrapper[5099]: fi Jan 21 18:15:22 crc kubenswrapper[5099]: Jan 21 18:15:22 crc kubenswrapper[5099]: # Enable multi-network policy if configured (control-plane always full mode) Jan 21 18:15:22 crc 
kubenswrapper[5099]: multi_network_policy_enabled_flag= Jan 21 18:15:22 crc kubenswrapper[5099]: if [[ "false" == "true" ]]; then Jan 21 18:15:22 crc kubenswrapper[5099]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Jan 21 18:15:22 crc kubenswrapper[5099]: fi Jan 21 18:15:22 crc kubenswrapper[5099]: Jan 21 18:15:22 crc kubenswrapper[5099]: # Enable admin network policy if configured (control-plane always full mode) Jan 21 18:15:22 crc kubenswrapper[5099]: admin_network_policy_enabled_flag= Jan 21 18:15:22 crc kubenswrapper[5099]: if [[ "true" == "true" ]]; then Jan 21 18:15:22 crc kubenswrapper[5099]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Jan 21 18:15:22 crc kubenswrapper[5099]: fi Jan 21 18:15:22 crc kubenswrapper[5099]: Jan 21 18:15:22 crc kubenswrapper[5099]: if [ "shared" == "shared" ]; then Jan 21 18:15:22 crc kubenswrapper[5099]: gateway_mode_flags="--gateway-mode shared" Jan 21 18:15:22 crc kubenswrapper[5099]: elif [ "shared" == "local" ]; then Jan 21 18:15:22 crc kubenswrapper[5099]: gateway_mode_flags="--gateway-mode local" Jan 21 18:15:22 crc kubenswrapper[5099]: else Jan 21 18:15:22 crc kubenswrapper[5099]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." Jan 21 18:15:22 crc kubenswrapper[5099]: exit 1 Jan 21 18:15:22 crc kubenswrapper[5099]: fi Jan 21 18:15:22 crc kubenswrapper[5099]: Jan 21 18:15:22 crc kubenswrapper[5099]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Jan 21 18:15:22 crc kubenswrapper[5099]: exec /usr/bin/ovnkube \ Jan 21 18:15:22 crc kubenswrapper[5099]: --enable-interconnect \ Jan 21 18:15:22 crc kubenswrapper[5099]: --init-cluster-manager "${K8S_NODE}" \ Jan 21 18:15:22 crc kubenswrapper[5099]: --config-file=/run/ovnkube-config/ovnkube.conf \ Jan 21 18:15:22 crc kubenswrapper[5099]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Jan 21 18:15:22 crc kubenswrapper[5099]: --metrics-bind-address "127.0.0.1:29108" \ Jan 21 18:15:22 crc kubenswrapper[5099]: --metrics-enable-pprof \ Jan 21 18:15:22 crc kubenswrapper[5099]: --metrics-enable-config-duration \ Jan 21 18:15:22 crc kubenswrapper[5099]: ${ovn_v4_join_subnet_opt} \ Jan 21 18:15:22 crc kubenswrapper[5099]: ${ovn_v6_join_subnet_opt} \ Jan 21 18:15:22 crc kubenswrapper[5099]: ${ovn_v4_transit_switch_subnet_opt} \ Jan 21 18:15:22 crc kubenswrapper[5099]: ${ovn_v6_transit_switch_subnet_opt} \ Jan 21 18:15:22 crc kubenswrapper[5099]: ${dns_name_resolver_enabled_flag} \ Jan 21 18:15:22 crc kubenswrapper[5099]: ${persistent_ips_enabled_flag} \ Jan 21 18:15:22 crc kubenswrapper[5099]: ${multi_network_enabled_flag} \ Jan 21 18:15:22 crc kubenswrapper[5099]: ${network_segmentation_enabled_flag} \ Jan 21 18:15:22 crc kubenswrapper[5099]: ${gateway_mode_flags} \ Jan 21 18:15:22 crc kubenswrapper[5099]: ${route_advertisements_enable_flag} \ Jan 21 18:15:22 crc kubenswrapper[5099]: ${preconfigured_udn_addresses_enable_flag} \ Jan 21 18:15:22 crc kubenswrapper[5099]: --enable-egress-ip=true \ Jan 21 18:15:22 crc kubenswrapper[5099]: --enable-egress-firewall=true \ Jan 21 18:15:22 crc kubenswrapper[5099]: --enable-egress-qos=true \ Jan 21 18:15:22 crc kubenswrapper[5099]: --enable-egress-service=true \ Jan 21 18:15:22 crc kubenswrapper[5099]: --enable-multicast \ Jan 21 18:15:22 crc kubenswrapper[5099]: --enable-multi-external-gateway=true \ Jan 21 18:15:22 crc kubenswrapper[5099]: ${multi_network_policy_enabled_flag} \ Jan 21 18:15:22 crc kubenswrapper[5099]: 
${admin_network_policy_enabled_flag} Jan 21 18:15:22 crc kubenswrapper[5099]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s7xvb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-nxrc9_openshift-ovn-kubernetes(dd3b8a6d-69a8-4079-a747-f379b71bcafe): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 21 18:15:22 crc kubenswrapper[5099]: > logger="UnhandledError" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.491145 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"17db8a23a15b96cb184c6c691c822dc477cba21d48db986b480d163f8465a7d4"} Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.492218 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-nxrc9" podUID="dd3b8a6d-69a8-4079-a747-f379b71bcafe" Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.492226 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.492822 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-nxrc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd3b8a6d-69a8-4079-a747-f379b71bcafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7xvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7xvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-nxrc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.494035 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bb5lc" 
event={"ID":"fedcb6dd-93e2-4530-b748-52a296d7809d","Type":"ContainerStarted","Data":"4602b356b36301bd887c14a676a88650f483da2f646b82c1d3375acf5d9f664a"} Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.496829 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6pvpm" event={"ID":"d9b34413-4767-4d59-b13b-8f882453977a","Type":"ContainerStarted","Data":"976375463f976d3df1b5d234c691b3c82188075eee429ef8922acffde1c909e1"} Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.496971 5099 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 21 18:15:22 crc kubenswrapper[5099]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Jan 21 18:15:22 crc kubenswrapper[5099]: set -o allexport Jan 21 18:15:22 crc kubenswrapper[5099]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Jan 21 18:15:22 crc kubenswrapper[5099]: source /etc/kubernetes/apiserver-url.env Jan 21 18:15:22 crc kubenswrapper[5099]: else Jan 21 18:15:22 crc kubenswrapper[5099]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Jan 21 18:15:22 crc kubenswrapper[5099]: exit 1 Jan 21 18:15:22 crc kubenswrapper[5099]: fi Jan 21 18:15:22 crc kubenswrapper[5099]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Jan 21 18:15:22 crc kubenswrapper[5099]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000
,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 21 18:15:22 crc kubenswrapper[5099]: > logger="UnhandledError" Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.497901 5099 kuberuntime_manager.go:1358] "Unhandled Error" err="init container 
&Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-75f6x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-bb5lc_openshift-multus(fedcb6dd-93e2-4530-b748-52a296d7809d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.498123 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.499125 5099 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 21 18:15:22 crc kubenswrapper[5099]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Jan 21 18:15:22 crc kubenswrapper[5099]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Jan 21 18:15:22 crc kubenswrapper[5099]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8mtl6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Re
cursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-6pvpm_openshift-multus(d9b34413-4767-4d59-b13b-8f882453977a): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 21 18:15:22 crc kubenswrapper[5099]: > logger="UnhandledError" Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.499221 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-bb5lc" podUID="fedcb6dd-93e2-4530-b748-52a296d7809d" Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.500223 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-6pvpm" podUID="d9b34413-4767-4d59-b13b-8f882453977a" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.519219 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb04eac-bd35-447c-88ec-2f7b7296cb0e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://b5effc70b8488095428d6d5459b5766bcc3d5f049f11532a3e33c309d5895ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://bccc1caa9affdcec7c83cf922fb2dcf8634fb3bfe34f4d0efc602ef68e8ee7b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://820b29ee082671c0eb57e0818417c685033d77f07d6f3616eaa1d2fd22cfa628\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://409970ffae80bc9259bb84be447a16b4506850a5ee1651f83b231fbb4e423cd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://ab3d8ac4f69b1d44b4ee29b2914f9f30a7e966194ce3efcfa2079bf66e522fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0b2410052c2fa6360fd3b85be6092bfad8e857f
bf2ca993187238c13769f8832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b2410052c2fa6360fd3b85be6092bfad8e857fbf2ca993187238c13769f8832\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T18:14:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9604f7270770e683bf5d664f618d34ca92249229df8a831af789d013e699f878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9604f7270770e683bf5d664f618d34ca92249229df8a831af789d013e699f878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T18:14:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T18:14:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6fa7c8ea86015110480c177f1d09a0bacd7ac31de42f05a4e2a6691ef15f510a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fa7c8ea86015110480c177f1d09a0bacd7ac31de42f05a4e2a6691ef15f510a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T18:14:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T18:14:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:14:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.532481 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"173cce9e-0a3e-4d85-b057-083e13852fa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://69214b9af392a2ff7edbb9c7275209780398e51d77d93f6fc12bdb6df10c0018\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b52ee1ab5f30ea08acd0bcb057da0e09c87c2f8116e08d2d2d7bd32030c95907\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf
9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://77fff9816418e4672b53ddb5f6696e449760e9cf1da327c01939f0df0bb333f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1dec0ab2e83799703e0d0ba5f91da97aaa7f611990710cb8ee7aa49e64a46994\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dec0ab2e83799703e0d0ba5f91da97aaa7f611990710cb8ee7aa49e64a46994\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T18:15:11Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW0121 18:15:10.717930 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 18:15:10.718167 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0121 18:15:10.719290 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2744791526/tls.crt::/tmp/serving-cert-2744791526/tls.key\\\\\\\"\\\\nI0121 18:15:10.978164 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 18:15:10.981243 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 18:15:10.981268 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 18:15:10.981347 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 18:15:10.981360 
1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 18:15:10.988236 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 18:15:10.988291 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 18:15:10.988264 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 18:15:10.988297 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 18:15:10.988309 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 18:15:10.988312 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 18:15:10.988316 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 18:15:10.988319 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 18:15:10.990920 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T18:15:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a571febb49a23145b3d2cad3e4cc539fe35f0d8fdd7c4876c9868a1d1889ac58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d3315339d0385d2d2682ac27476be8f619f5936da9319a84a35621133d98243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\
"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d3315339d0385d2d2682ac27476be8f619f5936da9319a84a35621133d98243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T18:14:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:14:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.543270 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.553928 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.563973 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.564029 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.564055 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.564073 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.564085 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:22Z","lastTransitionTime":"2026-01-21T18:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.566401 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-6pvpm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9b34413-4767-4d59-b13b-8f882453977a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mtl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\
\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6pvpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.582362 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bb5lc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fedcb6dd-93e2-4530-b748-52a296d7809d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-75f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bb5lc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.591010 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a3258e1-12c7-4a69-8c70-81a224fb787f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee286e0146a85cafd23651aabbbe69ebe16248b425e092b613ba569a236b6e20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c2f414962af81ed6e55d81dd5f9dd9dd326f051f9f43d3547232d163c9c8dfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c2f414962af81ed6e55d81dd5f9dd9dd326f051f9f43d3547232d163c9c8dfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T18:14:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:14:04Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.605152 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a40c7046-7081-492d-8099-e40a88ecf0ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://95eb561748a9a0787cdcdcbb483eb6c1e2c1949db936d7d25fbfe7f9cfc5db88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://af7a566641ee4a32e0af093712f5a413ba74a4178f0ad380ede1b059032e730a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[
0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://52c12ba2dd207284ad9505418797edb5216cc4e70217b3f68d2e9ca82396e7f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ea5e150318dfc21c5ddf7304a2aa589a3a74b69339533e126143c429353ee516\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:14:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 
18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.615687 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.624154 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tsdhb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d26f0ad-829f-4f64-82b5-1292bd2316f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghhnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghhnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tsdhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.632145 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b19b831f-eaf0-4c77-859b-84eb9a5f233c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gknfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gknfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hsl47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.644233 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2q8ng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57ef6e89-3637-4516-a464-973f45d9ed03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6xzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2q8ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.656482 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.666725 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.666813 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.666826 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.666846 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.666859 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:22Z","lastTransitionTime":"2026-01-21T18:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.670366 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.680411 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-s88dj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20fc4331-f128-4a9a-b77f-85af1cf094cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7x9c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s88dj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.690713 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f829d4ee-178c-4ccd-9dc3-d0eb0300919f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4b9669ba5715cd91dacfc8e6be29f5830419da2d302adcb6b5fa29ef07eac6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d7509aab4bd3b4f6d8703e94734d66e77bba951303378eb60ba01943532bfb41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3fb63119f6a31701b62cf7591777dad22a4f69872d5cdb087308b8b3f6ded84d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://950a43772ad6417d26292c8d55cf9857142fb1e232d309355724767d71735f95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950a43772ad6417d26292c8d55cf9857142fb1e232d309355724767d71735f95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T18:14:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:14:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.700040 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.715721 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7521550-bc40-43eb-bcb0-f563416d810b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b
21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-node-svjkb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.737944 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb04eac-bd35-447c-88ec-2f7b7296cb0e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://b5effc70b8488095428d6d5459b5766bcc3d5f049f11532a3e33c309d5895ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://bccc1caa9affdcec7c83cf922fb2dcf8634fb3bfe34f4d0efc602ef68e8ee7b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\
\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://820b29ee082671c0eb57e0818417c685033d77f07d6f3616eaa1d2fd22cfa628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://409970ffae80bc9259bb84be447a16b4506850a5ee1651f83b231fbb4e423cd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://ab3d8ac4f69b1d44b4ee29b2914f9f30a7e966194ce3efcfa2079bf66e522fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0b2410052c2fa6360fd3b85be6092bfad8e857fbf2ca993187238c13769f8832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b2410052c2fa6360fd3b85be6092bfad8e857fbf2ca993187238c13769f8832\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T18:14:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9604f7270770e683bf5d664f618d34ca92249229df8a831af789d013e699f878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9604f7270770e683bf5d664f618d34ca92249229df8a831af789d013e699f878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T18:14:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T18:14:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6fa7c8ea86015110480c177f1d09a0bacd7ac31de42f05a4e2a6691ef15f510a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fa7c8ea86015110480c177f1d09a0bacd7ac31de42f05a4e2a6691ef15f510a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T18:14:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T18:1
4:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:14:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.750435 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"173cce9e-0a3e-4d85-b057-083e13852fa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://69214b9af392a2ff7edbb9c7275209780398e51d77d93f6fc12bdb6df10c0018\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b52ee1ab5f30ea08acd0bcb057da0e09c87c2f8116e08d2d2d7bd32030c95907\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://77fff9816418e4672b53ddb5f6696e449760e9cf1da327c01939f0df0bb333f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1dec0ab2e83799703e0d0ba5f91da97aaa7f611990710cb8ee7aa49e64a46994\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dec0ab2e83799703e0d0ba5f91da97aaa7f611990710cb8ee7aa49e64a46994\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T18:15:11Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW0121 18:15:10.717930 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 18:15:10.718167 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0121 18:15:10.719290 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2744791526/tls.crt::/tmp/serving-cert-2744791526/tls.key\\\\\\\"\\\\nI0121 18:15:10.978164 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 18:15:10.981243 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 18:15:10.981268 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 18:15:10.981347 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 18:15:10.981360 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 18:15:10.988236 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 18:15:10.988291 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 18:15:10.988264 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 18:15:10.988297 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 18:15:10.988309 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 18:15:10.988312 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 18:15:10.988316 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 18:15:10.988319 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 18:15:10.990920 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T18:15:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a571febb49a23145b3d2cad3e4cc539fe35f0d8fdd7c4876c9868a1d1889ac58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d3315339d0385d2d2682ac27476be8f619f5936da9319a84a35621133d98243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d3315339d0385d2d2682ac27476be8f619f5936da9319a84a35621133d98243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T18:14:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:14:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.763110 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.769989 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.770035 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.770044 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.770060 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.770069 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:22Z","lastTransitionTime":"2026-01-21T18:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.772115 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.781068 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-6pvpm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9b34413-4767-4d59-b13b-8f882453977a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mtl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6pvpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.790626 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bb5lc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fedcb6dd-93e2-4530-b748-52a296d7809d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bb5lc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.796625 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a3258e1-12c7-4a69-8c70-81a224fb787f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee286e0146a85cafd23651aabbbe69ebe16248b425e092b613ba569a236b6e20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c2f414962af81ed6e55d81dd5f9dd9dd326f051f9f43d3547232d163c9c8dfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c2f414962af81ed6e55d81dd5f9dd9dd326f051f9f43d3547232d163c9c8dfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T18:14:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:14:04Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.824504 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a40c7046-7081-492d-8099-e40a88ecf0ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://95eb561748a9a0787cdcdcbb483eb6c1e2c1949db936d7d25fbfe7f9cfc5db88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://af7a566641ee4a32e0af093712f5a413ba74a4178f0ad380ede1b059032e730a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[
0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://52c12ba2dd207284ad9505418797edb5216cc4e70217b3f68d2e9ca82396e7f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ea5e150318dfc21c5ddf7304a2aa589a3a74b69339533e126143c429353ee516\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:14:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 
18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.867059 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.873496 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.873554 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.873566 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.873587 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.873604 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:22Z","lastTransitionTime":"2026-01-21T18:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.906241 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tsdhb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d26f0ad-829f-4f64-82b5-1292bd2316f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghhnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghhnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tsdhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.913505 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.913532 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.913718 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 21 18:15:22 crc kubenswrapper[5099]: E0121 18:15:22.913975 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.945980 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b19b831f-eaf0-4c77-859b-84eb9a5f233c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gknfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gknfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hsl47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.975442 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.975488 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.975499 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.975516 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.975528 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:22Z","lastTransitionTime":"2026-01-21T18:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:22 crc kubenswrapper[5099]: I0121 18:15:22.983565 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2q8ng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57ef6e89-3637-4516-a464-973f45d9ed03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6xzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2q8ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.027164 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.068997 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.078579 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.078639 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.078656 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.078679 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.078702 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:23Z","lastTransitionTime":"2026-01-21T18:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.106225 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-s88dj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20fc4331-f128-4a9a-b77f-85af1cf094cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7x9c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s88dj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.146663 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f829d4ee-178c-4ccd-9dc3-d0eb0300919f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4b9669ba5715cd91dacfc8e6be29f5830419da2d302adcb6b5fa29ef07eac6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d7509aab4bd3b4f6d8703e94734d66e77bba951303378eb60ba01943532bfb41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3fb63119f6a31701b62cf7591777dad22a4f69872d5cdb087308b8b3f6ded84d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://950a43772ad6417d26292c8d55cf9857142fb1e232d309355724767d71735f95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950a43772ad6417d26292c8d55cf9857142fb1e232d309355724767d71735f95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T18:14:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:14:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.181820 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.181894 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.181909 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.181932 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.181946 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:23Z","lastTransitionTime":"2026-01-21T18:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.188665 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.232695 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7521550-bc40-43eb-bcb0-f563416d810b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-svjkb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.264440 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-nxrc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd3b8a6d-69a8-4079-a747-f379b71bcafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7xvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7xvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-nxrc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.278207 5099 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 18:15:23 crc kubenswrapper[5099]: E0121 18:15:23.278352 5099 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 18:15:23 crc kubenswrapper[5099]: E0121 18:15:23.278518 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-21 18:15:25.278407914 +0000 UTC m=+82.692370375 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.278650 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.278792 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.278875 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 18:15:23 crc kubenswrapper[5099]: E0121 18:15:23.278895 5099 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 18:15:23 crc kubenswrapper[5099]: E0121 18:15:23.278958 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 18:15:23 crc kubenswrapper[5099]: E0121 18:15:23.278982 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 18:15:23 crc kubenswrapper[5099]: E0121 18:15:23.278996 5099 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod 
openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 18:15:23 crc kubenswrapper[5099]: E0121 18:15:23.279002 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 18:15:23 crc kubenswrapper[5099]: E0121 18:15:23.279024 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-21 18:15:25.278999419 +0000 UTC m=+82.692961880 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 18:15:23 crc kubenswrapper[5099]: E0121 18:15:23.279032 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 18:15:23 crc kubenswrapper[5099]: E0121 18:15:23.279065 5099 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 18:15:23 crc kubenswrapper[5099]: E0121 18:15:23.279080 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-21 18:15:25.2790488 +0000 UTC m=+82.693011261 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 18:15:23 crc kubenswrapper[5099]: E0121 18:15:23.279107 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-21 18:15:25.279100791 +0000 UTC m=+82.693063252 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.284619 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.284677 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.284689 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.284710 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.284724 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:23Z","lastTransitionTime":"2026-01-21T18:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.379953 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:15:23 crc kubenswrapper[5099]: E0121 18:15:23.380228 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:15:25.380183959 +0000 UTC m=+82.794146420 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.387716 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.387856 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.387891 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.387921 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.387941 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:23Z","lastTransitionTime":"2026-01-21T18:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.481632 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d26f0ad-829f-4f64-82b5-1292bd2316f0-metrics-certs\") pod \"network-metrics-daemon-tsdhb\" (UID: \"0d26f0ad-829f-4f64-82b5-1292bd2316f0\") " pod="openshift-multus/network-metrics-daemon-tsdhb" Jan 21 18:15:23 crc kubenswrapper[5099]: E0121 18:15:23.481843 5099 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 18:15:23 crc kubenswrapper[5099]: E0121 18:15:23.481915 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d26f0ad-829f-4f64-82b5-1292bd2316f0-metrics-certs podName:0d26f0ad-829f-4f64-82b5-1292bd2316f0 nodeName:}" failed. No retries permitted until 2026-01-21 18:15:25.481895901 +0000 UTC m=+82.895858362 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0d26f0ad-829f-4f64-82b5-1292bd2316f0-metrics-certs") pod "network-metrics-daemon-tsdhb" (UID: "0d26f0ad-829f-4f64-82b5-1292bd2316f0") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.490339 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.490421 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.490445 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.490481 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.490506 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:23Z","lastTransitionTime":"2026-01-21T18:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.592848 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.592902 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.592919 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.592936 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.592950 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:23Z","lastTransitionTime":"2026-01-21T18:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.696048 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.696110 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.696130 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.696151 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.696166 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:23Z","lastTransitionTime":"2026-01-21T18:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.799474 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.799529 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.799540 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.799557 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.799569 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:23Z","lastTransitionTime":"2026-01-21T18:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.905013 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.905088 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.905103 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.905122 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.905137 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:23Z","lastTransitionTime":"2026-01-21T18:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.913708 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 18:15:23 crc kubenswrapper[5099]: E0121 18:15:23.913949 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.913970 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tsdhb" Jan 21 18:15:23 crc kubenswrapper[5099]: E0121 18:15:23.914161 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tsdhb" podUID="0d26f0ad-829f-4f64-82b5-1292bd2316f0" Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.930914 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a40c7046-7081-492d-8099-e40a88ecf0ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://95eb561748a9a0787cdcdcbb483eb6c1e2c1949db936d7d25fbfe7f9cfc5db88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://af7a566641ee4a32e0af093712f5a413ba74a4178f0ad380ede1b059032e730a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://52c12ba2dd207284ad9505418797edb5216cc4e70217b3f68d2e9ca82396e7f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ea5e150318dfc21c5ddf7304a2aa589a3a74b69339533e126143c429353ee516\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:09Z\\\"}},\\\"user\\\":{\\\"linu
x\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:14:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.942517 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.950972 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tsdhb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d26f0ad-829f-4f64-82b5-1292bd2316f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghhnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghhnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tsdhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.959655 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b19b831f-eaf0-4c77-859b-84eb9a5f233c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gknfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gknfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hsl47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.967829 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2q8ng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57ef6e89-3637-4516-a464-973f45d9ed03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6xzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2q8ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.978770 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:23 crc kubenswrapper[5099]: I0121 18:15:23.991342 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.005522 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-s88dj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20fc4331-f128-4a9a-b77f-85af1cf094cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7x9c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s88dj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" 
Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.008008 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.008055 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.008067 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.008084 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.008096 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:24Z","lastTransitionTime":"2026-01-21T18:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.019267 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f829d4ee-178c-4ccd-9dc3-d0eb0300919f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4b9669ba5715cd91dacfc8e6be29f5830419da2d302adcb6b5fa29ef07eac6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d7509aab4bd3b4f6d8703e94734d66e77bba95
1303378eb60ba01943532bfb41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3fb63119f6a31701b62cf7591777dad22a4f69872d5cdb087308b8b3f6ded84d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://950a43772ad6417d26292c8d55cf9857142fb1e232d309355724767d71735f95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950a43772ad6417d26292c8d55cf9857142fb1e232d309355724767d71735f95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T18:14:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:14:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.032290 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.047868 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7521550-bc40-43eb-bcb0-f563416d810b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-svjkb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.057585 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-nxrc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd3b8a6d-69a8-4079-a747-f379b71bcafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7xvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7xvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-nxrc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.088069 5099 
status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb04eac-bd35-447c-88ec-2f7b7296cb0e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://b5effc70b8488095428d6d5459b5766bcc3d5f049f11532a3e33c309d5895ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://bccc1caa9affdcec7c83cf922fb2dcf8634fb3bfe34f4d0efc602ef68e8ee7b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50
Mi\\\"},\\\"containerID\\\":\\\"cri-o://820b29ee082671c0eb57e0818417c685033d77f07d6f3616eaa1d2fd22cfa628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://409970ffae80bc9259bb84be447a16b4506850a5ee1651f83b231fbb4e423cd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://ab3d8ac4f69b1d44b4ee29b2914f9f30a7e966194ce3efcfa2079bf66e522fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\
\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0b2410052c2fa6360fd3b85be6092bfad8e857fbf2ca993187238c13769f8832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b2410052c2fa6360fd3b85be6092bfad8e857fbf2ca993187238c13769f8832\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T18:14:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9604f7270770e683bf5d664f618d34ca92249229df8a831af789d013e699f878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9604f7270770e683bf5d664f618d34ca92249229df8a831af789d013e699f878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T18:14:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T18:14:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6fa7c8ea86015110480c177f1d09a0bacd7ac31de42f05a4e2a6691ef15f510a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fa7c8ea86015110480c177f1d09a0bacd7ac31de42f05a4e2a6691ef15f510a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T18:14:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T18:14:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/us
r/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:14:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.103807 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"173cce9e-0a3e-4d85-b057-083e13852fa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://69214b9af392a2ff7edbb9c7275209780398e51d77d93f6fc12bdb6df10c0018\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b52ee1ab5f30ea08acd0bcb057da0e09c87c2f8116e08d2d2d7bd32030c95907\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\
":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://77fff9816418e4672b53ddb5f6696e449760e9cf1da327c01939f0df0bb333f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1dec0ab2e83799703e0d0ba5f91da97aaa7f611990710cb8ee7aa49e64a46994\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dec0ab2e83799703e0d0ba5f91da97aaa7f611990710cb8ee7aa49e64a46994\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T18:15:11Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW0121 18:15:10.717930 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 18:15:10.718167 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0121 18:15:10.719290 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2744791526/tls.crt::/tmp/serving-cert-2744791526/tls.key\\\\\\\"\\\\nI0121 18:15:10.978164 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 18:15:10.981243 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 18:15:10.981268 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 
18:15:10.981347 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 18:15:10.981360 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 18:15:10.988236 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 18:15:10.988291 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 18:15:10.988264 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 18:15:10.988297 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 18:15:10.988309 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 18:15:10.988312 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 18:15:10.988316 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 18:15:10.988319 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 18:15:10.990920 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T18:15:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a571febb49a23145b3d2cad3e4cc539fe35f0d8fdd7c4876c9868a1d1889ac58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d3315339d0385d2d2682ac27476be8f619f5936da9319a84a35621133d98243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d3315339d0385d2d2682ac27476be8f619f5936da9319a84a35621133d98243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T18:14:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:14:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.110964 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.111021 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.111032 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.111052 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.111068 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:24Z","lastTransitionTime":"2026-01-21T18:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.117883 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.130121 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.145567 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-6pvpm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9b34413-4767-4d59-b13b-8f882453977a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mtl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6pvpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.160936 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bb5lc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fedcb6dd-93e2-4530-b748-52a296d7809d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bb5lc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.170551 
5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a3258e1-12c7-4a69-8c70-81a224fb787f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee286e0146a85cafd23651aabbbe69ebe16248b425e092b613ba569a236b6e20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c2f414962af81ed6e55d81dd5f9dd9dd326f051f9f43d3547232d163c9c8dfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c2f414962af81ed6e55d81dd5f9dd9dd326f051f9f43d3547232d163c9c8dfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T18:14:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\
\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:14:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.214244 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.214682 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.214795 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.214872 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.214941 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:24Z","lastTransitionTime":"2026-01-21T18:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.316828 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.316897 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.316908 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.316931 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.316942 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:24Z","lastTransitionTime":"2026-01-21T18:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.420164 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.420535 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.420607 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.420673 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.420958 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:24Z","lastTransitionTime":"2026-01-21T18:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.523326 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.523933 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.524023 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.524106 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.524184 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:24Z","lastTransitionTime":"2026-01-21T18:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.626368 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.626813 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.626889 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.626998 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.627060 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:24Z","lastTransitionTime":"2026-01-21T18:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.729651 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.730027 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.730104 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.730201 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.730379 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:24Z","lastTransitionTime":"2026-01-21T18:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.832680 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.832771 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.832796 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.832820 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.832834 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:24Z","lastTransitionTime":"2026-01-21T18:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.912803 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.913071 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 18:15:24 crc kubenswrapper[5099]: E0121 18:15:24.913680 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 21 18:15:24 crc kubenswrapper[5099]: E0121 18:15:24.913452 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.935409 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.935535 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.935556 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.935586 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:24 crc kubenswrapper[5099]: I0121 18:15:24.935606 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:24Z","lastTransitionTime":"2026-01-21T18:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:25 crc kubenswrapper[5099]: I0121 18:15:25.038769 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:25 crc kubenswrapper[5099]: I0121 18:15:25.038830 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:25 crc kubenswrapper[5099]: I0121 18:15:25.038841 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:25 crc kubenswrapper[5099]: I0121 18:15:25.038860 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:25 crc kubenswrapper[5099]: I0121 18:15:25.038872 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:25Z","lastTransitionTime":"2026-01-21T18:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:25 crc kubenswrapper[5099]: I0121 18:15:25.142195 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:25 crc kubenswrapper[5099]: I0121 18:15:25.142262 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:25 crc kubenswrapper[5099]: I0121 18:15:25.142277 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:25 crc kubenswrapper[5099]: I0121 18:15:25.142299 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:25 crc kubenswrapper[5099]: I0121 18:15:25.142314 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:25Z","lastTransitionTime":"2026-01-21T18:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:25 crc kubenswrapper[5099]: I0121 18:15:25.244720 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:25 crc kubenswrapper[5099]: I0121 18:15:25.244795 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:25 crc kubenswrapper[5099]: I0121 18:15:25.244805 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:25 crc kubenswrapper[5099]: I0121 18:15:25.244820 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:25 crc kubenswrapper[5099]: I0121 18:15:25.244832 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:25Z","lastTransitionTime":"2026-01-21T18:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:25 crc kubenswrapper[5099]: I0121 18:15:25.304335 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 18:15:25 crc kubenswrapper[5099]: I0121 18:15:25.304400 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 18:15:25 crc kubenswrapper[5099]: I0121 18:15:25.304434 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 18:15:25 crc kubenswrapper[5099]: I0121 18:15:25.304490 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 18:15:25 crc kubenswrapper[5099]: E0121 18:15:25.304642 5099 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 18:15:25 crc kubenswrapper[5099]: E0121 18:15:25.304722 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-21 18:15:29.30470012 +0000 UTC m=+86.718662581 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 18:15:25 crc kubenswrapper[5099]: E0121 18:15:25.305001 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 18:15:25 crc kubenswrapper[5099]: E0121 18:15:25.305029 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 18:15:25 crc kubenswrapper[5099]: E0121 18:15:25.305047 5099 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 18:15:25 crc kubenswrapper[5099]: E0121 18:15:25.305053 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 18:15:25 crc kubenswrapper[5099]: E0121 18:15:25.305086 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 18:15:25 crc kubenswrapper[5099]: E0121 18:15:25.305101 5099 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 18:15:25 crc kubenswrapper[5099]: E0121 18:15:25.305086 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-21 18:15:29.305075949 +0000 UTC m=+86.719038410 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 18:15:25 crc kubenswrapper[5099]: E0121 18:15:25.305144 5099 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 18:15:25 crc kubenswrapper[5099]: E0121 18:15:25.305164 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-21 18:15:29.305148821 +0000 UTC m=+86.719111282 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 18:15:25 crc kubenswrapper[5099]: E0121 18:15:25.305335 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-21 18:15:29.305306186 +0000 UTC m=+86.719268647 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 18:15:25 crc kubenswrapper[5099]: I0121 18:15:25.347857 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:25 crc kubenswrapper[5099]: I0121 18:15:25.347918 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:25 crc kubenswrapper[5099]: I0121 18:15:25.347929 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:25 crc kubenswrapper[5099]: I0121 18:15:25.347948 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:25 crc kubenswrapper[5099]: I0121 18:15:25.347960 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:25Z","lastTransitionTime":"2026-01-21T18:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:25 crc kubenswrapper[5099]: I0121 18:15:25.405774 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:15:25 crc kubenswrapper[5099]: E0121 18:15:25.406077 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:15:29.406031653 +0000 UTC m=+86.819994124 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:15:25 crc kubenswrapper[5099]: I0121 18:15:25.451583 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:25 crc kubenswrapper[5099]: I0121 18:15:25.451644 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:25 crc kubenswrapper[5099]: I0121 18:15:25.451656 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:25 crc kubenswrapper[5099]: I0121 18:15:25.451678 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:25 crc kubenswrapper[5099]: I0121 18:15:25.451693 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:25Z","lastTransitionTime":"2026-01-21T18:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:25 crc kubenswrapper[5099]: I0121 18:15:25.507409 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d26f0ad-829f-4f64-82b5-1292bd2316f0-metrics-certs\") pod \"network-metrics-daemon-tsdhb\" (UID: \"0d26f0ad-829f-4f64-82b5-1292bd2316f0\") " pod="openshift-multus/network-metrics-daemon-tsdhb" Jan 21 18:15:25 crc kubenswrapper[5099]: E0121 18:15:25.507685 5099 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 18:15:25 crc kubenswrapper[5099]: E0121 18:15:25.507832 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d26f0ad-829f-4f64-82b5-1292bd2316f0-metrics-certs podName:0d26f0ad-829f-4f64-82b5-1292bd2316f0 nodeName:}" failed. No retries permitted until 2026-01-21 18:15:29.507799007 +0000 UTC m=+86.921761468 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0d26f0ad-829f-4f64-82b5-1292bd2316f0-metrics-certs") pod "network-metrics-daemon-tsdhb" (UID: "0d26f0ad-829f-4f64-82b5-1292bd2316f0") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 21 18:15:25 crc kubenswrapper[5099]: I0121 18:15:25.554433 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 18:15:25 crc kubenswrapper[5099]: I0121 18:15:25.554540 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 18:15:25 crc kubenswrapper[5099]: I0121 18:15:25.554555 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 18:15:25 crc kubenswrapper[5099]: I0121 18:15:25.554579 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 18:15:25 crc kubenswrapper[5099]: I0121 18:15:25.554601 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:25Z","lastTransitionTime":"2026-01-21T18:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... 18:15:25.657 through 18:15:25.863: the five-entry NodeNotReady heartbeat block (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, "Node became not ready") repeats at ~100 ms intervals, identical apart from timestamps ...]
Jan 21 18:15:25 crc kubenswrapper[5099]: I0121 18:15:25.912802 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tsdhb"
Jan 21 18:15:25 crc kubenswrapper[5099]: E0121 18:15:25.912981 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tsdhb" podUID="0d26f0ad-829f-4f64-82b5-1292bd2316f0"
Jan 21 18:15:25 crc kubenswrapper[5099]: I0121 18:15:25.913122 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 21 18:15:25 crc kubenswrapper[5099]: E0121 18:15:25.914612 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
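
The kubelet_node_status.go and setters.go entries above show the status manager republishing a Ready=False condition on every ~100 ms node-status sync because the container runtime keeps reporting NetworkReady=false. A minimal sketch of how such a setter might derive the condition, with invented types (the real logic lives in the kubelet's nodestatus setters and is more involved):

    // Sketch only: invented types mirroring the condition JSON in the log.
    package main

    import (
        "fmt"
        "time"
    )

    type NodeCondition struct {
        Type, Status, Reason, Message string
        LastHeartbeatTime             time.Time
        LastTransitionTime            time.Time
    }

    // readyCondition returns Ready=False with reason KubeletNotReady while
    // the container runtime reports that its network is not ready.
    func readyCondition(networkReady bool, networkErr string, now time.Time) NodeCondition {
        c := NodeCondition{Type: "Ready", Status: "True", Reason: "KubeletReady",
            LastHeartbeatTime: now, LastTransitionTime: now}
        if !networkReady {
            c.Status = "False"
            c.Reason = "KubeletNotReady"
            c.Message = "container runtime network not ready: " + networkErr
        }
        return c
    }

    func main() {
        c := readyCondition(false, "NetworkReady=false reason:NetworkPluginNotReady", time.Now())
        fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
    }
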
[... 18:15:25.966 through 18:15:26.902: NodeNotReady heartbeat block repeats at ~100 ms intervals ...]
Jan 21 18:15:26 crc kubenswrapper[5099]: I0121 18:15:26.913238 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 21 18:15:26 crc kubenswrapper[5099]: I0121 18:15:26.913335 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 21 18:15:26 crc kubenswrapper[5099]: E0121 18:15:26.913567 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 21 18:15:26 crc kubenswrapper[5099]: E0121 18:15:26.913767 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
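
The util.go:30 / pod_workers.go:1301 pairs above capture the retry shape: a pod with no live sandbox needs a new one, but sandbox creation is refused while the runtime network is down, so the sync is skipped and tried again on a later tick (the same pods reappear roughly every two seconds below). A rough sketch of that gate, with invented names:

    // Sketch only: invented names for the sandbox-creation gate.
    package main

    import (
        "errors"
        "fmt"
    )

    // syncPod mirrors the ordering in the log: first notice the missing
    // sandbox, then refuse to create one while the network is not ready.
    func syncPod(pod string, hasSandbox, networkReady bool) error {
        if hasSandbox {
            return nil
        }
        fmt.Printf("No sandbox for pod can be found. Need to start a new one: %s\n", pod)
        if !networkReady {
            // Surfaces as "Error syncing pod, skipping"; the pod worker
            // simply tries again on a later sync.
            return errors.New("network is not ready: container runtime network not ready")
        }
        // ... create the sandbox here ...
        return nil
    }

    func main() {
        err := syncPod("openshift-multus/network-metrics-daemon-tsdhb", false, false)
        fmt.Println("sync result:", err)
    }
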
[... 18:15:27.006 through 18:15:27.833: NodeNotReady heartbeat block repeats at ~100 ms intervals ...]
Jan 21 18:15:27 crc kubenswrapper[5099]: I0121 18:15:27.912948 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tsdhb"
Jan 21 18:15:27 crc kubenswrapper[5099]: I0121 18:15:27.912967 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 21 18:15:27 crc kubenswrapper[5099]: E0121 18:15:27.913199 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tsdhb" podUID="0d26f0ad-829f-4f64-82b5-1292bd2316f0"
Jan 21 18:15:27 crc kubenswrapper[5099]: E0121 18:15:27.913305 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
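
The root cause repeated in every message is the empty CNI config directory. A sketch of roughly how a CNI-backed runtime detects this, assuming libcni's convention of scanning for *.conf, *.conflist and *.json files (path as in the log; the real lookup is libcni's ConfFiles):

    // Sketch only: an empty result here is what surfaces as
    // "no CNI configuration file in /etc/kubernetes/cni/net.d/".
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func cniConfFiles(dir string) ([]string, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return nil, err
        }
        var files []string
        for _, e := range entries {
            if e.IsDir() {
                continue
            }
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                files = append(files, filepath.Join(dir, e.Name()))
            }
        }
        return files, nil
    }

    func main() {
        files, err := cniConfFiles("/etc/kubernetes/cni/net.d")
        if err != nil || len(files) == 0 {
            fmt.Println("network plugin not ready: no CNI configuration file found", err)
            return
        }
        fmt.Println("CNI configs:", files)
    }
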
[... 18:15:27.936 through 18:15:29.276: NodeNotReady heartbeat block repeats at ~100 ms intervals; at 18:15:28.913 the "No sandbox for pod can be found" / "Error syncing pod, skipping" pair recurs for networking-console-plugin-5ff7774fd9-nljh6 and network-check-target-fhkjl ...]
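
The reconciler entries that follow all fail with object "<ns>"/"<name>" not registered. The kubelet serves configmaps and secrets to volume plugins from a watch-based cache that only tracks objects registered for admitted pods; until that registration happens, every lookup errors out. A toy cache showing the shape of that check (types invented; my understanding is the real code sits in the kubelet's util/manager package):

    // Sketch only: a toy stand-in for the kubelet's object cache.
    package main

    import "fmt"

    type key struct{ namespace, name string }

    type objectCache struct{ registered map[key]bool }

    // Register records that an admitted pod references this object.
    func (c *objectCache) Register(namespace, name string) {
        if c.registered == nil {
            c.registered = map[key]bool{}
        }
        c.registered[key{namespace, name}] = true
    }

    // Get refuses to serve objects no pod has registered yet.
    func (c *objectCache) Get(namespace, name string) error {
        if !c.registered[key{namespace, name}] {
            return fmt.Errorf("object %q/%q not registered", namespace, name)
        }
        return nil // a real cache would return the object from its informer
    }

    func main() {
        var c objectCache
        fmt.Println(c.Get("openshift-multus", "metrics-daemon-secret")) // not registered
        c.Register("openshift-multus", "metrics-daemon-secret")
        fmt.Println(c.Get("openshift-multus", "metrics-daemon-secret")) // nil
    }
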
Jan 21 18:15:29 crc kubenswrapper[5099]: I0121 18:15:29.350791 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 21 18:15:29 crc kubenswrapper[5099]: I0121 18:15:29.350861 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 21 18:15:29 crc kubenswrapper[5099]: I0121 18:15:29.350894 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 21 18:15:29 crc kubenswrapper[5099]: I0121 18:15:29.350923 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 21 18:15:29 crc kubenswrapper[5099]: E0121 18:15:29.351023 5099 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 21 18:15:29 crc kubenswrapper[5099]: E0121 18:15:29.351025 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 21 18:15:29 crc kubenswrapper[5099]: E0121 18:15:29.351056 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 21 18:15:29 crc kubenswrapper[5099]: E0121 18:15:29.351072 5099 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 18:15:29 crc kubenswrapper[5099]: E0121 18:15:29.351105 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-21 18:15:37.35108032 +0000 UTC m=+94.765042781 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 21 18:15:29 crc kubenswrapper[5099]: E0121 18:15:29.351118 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-21 18:15:37.35111233 +0000 UTC m=+94.765074791 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 18:15:29 crc kubenswrapper[5099]: E0121 18:15:29.351134 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 21 18:15:29 crc kubenswrapper[5099]: E0121 18:15:29.351204 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 21 18:15:29 crc kubenswrapper[5099]: E0121 18:15:29.351219 5099 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 18:15:29 crc kubenswrapper[5099]: E0121 18:15:29.351308 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-21 18:15:37.351284675 +0000 UTC m=+94.765247136 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 18:15:29 crc kubenswrapper[5099]: E0121 18:15:29.351169 5099 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 21 18:15:29 crc kubenswrapper[5099]: E0121 18:15:29.351359 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-21 18:15:37.351353467 +0000 UTC m=+94.765315928 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
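
A projected volume such as kube-api-access-gwt8b bundles several sources (a service-account token plus the kube-root-ca.crt and openshift-service-ca.crt configmaps), and projected.go collects one error per failing source, which is why the MountVolume.SetUp messages above carry a bracketed error list. A sketch of that aggregation using the standard library (the kubelet itself aggregates via apimachinery's error helpers, as far as I know):

    // Sketch only: per-source errors joined into one bracketed failure.
    package main

    import (
        "errors"
        "fmt"
    )

    // prepareProjected gathers one error per failing source, like
    // projected.go's "Error preparing data for projected volume ...".
    func prepareProjected(sourceErrs ...error) error {
        return errors.Join(sourceErrs...) // needs Go 1.20+
    }

    func main() {
        err := prepareProjected(
            errors.New(`object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered`),
            errors.New(`object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered`),
        )
        fmt.Println("Error preparing data for projected volume kube-api-access-gwt8b:", err)
    }
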
[... 18:15:29.379: NodeNotReady heartbeat block ...]
Jan 21 18:15:29 crc kubenswrapper[5099]: I0121 18:15:29.451990 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 18:15:29 crc kubenswrapper[5099]: E0121 18:15:29.452388 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:15:37.452336461 +0000 UTC m=+94.866299042 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
[... 18:15:29.482: NodeNotReady heartbeat block ...]
Jan 21 18:15:29 crc kubenswrapper[5099]: I0121 18:15:29.553388 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d26f0ad-829f-4f64-82b5-1292bd2316f0-metrics-certs\") pod \"network-metrics-daemon-tsdhb\" (UID: \"0d26f0ad-829f-4f64-82b5-1292bd2316f0\") " pod="openshift-multus/network-metrics-daemon-tsdhb"
Jan 21 18:15:29 crc kubenswrapper[5099]: E0121 18:15:29.553702 5099 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 21 18:15:29 crc kubenswrapper[5099]: E0121 18:15:29.553884 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d26f0ad-829f-4f64-82b5-1292bd2316f0-metrics-certs podName:0d26f0ad-829f-4f64-82b5-1292bd2316f0 nodeName:}" failed. No retries permitted until 2026-01-21 18:15:37.553852229 +0000 UTC m=+94.967814690 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0d26f0ad-829f-4f64-82b5-1292bd2316f0-metrics-certs") pod "network-metrics-daemon-tsdhb" (UID: "0d26f0ad-829f-4f64-82b5-1292bd2316f0") : object "openshift-multus"/"metrics-daemon-secret" not registered
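
Each failed volume operation above is parked by nestedpendingoperations.go with an exponential backoff; by m=+94s the delay has grown to 8 s ("durationBeforeRetry 8s"), which is why every retry lines up at 18:15:37. A sketch of a doubling-with-cap policy consistent with these numbers (the 500 ms initial value and ~2 m cap are my assumption of the kubelet defaults):

    // Sketch only: doubling backoff with a cap, consistent with the 8 s
    // "durationBeforeRetry" seen above; constants are assumptions.
    package main

    import (
        "fmt"
        "time"
    )

    func nextBackoff(current time.Duration) time.Duration {
        const (
            initialDelay = 500 * time.Millisecond
            maxDelay     = 2*time.Minute + 2*time.Second
        )
        if current < initialDelay {
            return initialDelay
        }
        if next := current * 2; next < maxDelay {
            return next
        }
        return maxDelay
    }

    func main() {
        var d time.Duration
        for attempt := 1; attempt <= 6; attempt++ {
            d = nextBackoff(d)
            fmt.Printf("attempt %d: no retries permitted for %v\n", attempt, d)
        }
        // the fifth failure reaches 8s: 0.5s, 1s, 2s, 4s, 8s, 16s, ...
    }
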
[... 18:15:29.584 through 18:15:29.893: NodeNotReady heartbeat block repeats at ~100 ms intervals ...]
Jan 21 18:15:29 crc kubenswrapper[5099]: I0121 18:15:29.913231 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 21 18:15:29 crc kubenswrapper[5099]: E0121 18:15:29.913471 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 21 18:15:29 crc kubenswrapper[5099]: I0121 18:15:29.913231 5099 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tsdhb" Jan 21 18:15:29 crc kubenswrapper[5099]: E0121 18:15:29.913755 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tsdhb" podUID="0d26f0ad-829f-4f64-82b5-1292bd2316f0" Jan 21 18:15:29 crc kubenswrapper[5099]: I0121 18:15:29.997025 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:29 crc kubenswrapper[5099]: I0121 18:15:29.997123 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:29 crc kubenswrapper[5099]: I0121 18:15:29.997153 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:29 crc kubenswrapper[5099]: I0121 18:15:29.997183 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:29 crc kubenswrapper[5099]: I0121 18:15:29.997205 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:29Z","lastTransitionTime":"2026-01-21T18:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:30 crc kubenswrapper[5099]: I0121 18:15:30.100283 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:30 crc kubenswrapper[5099]: I0121 18:15:30.100347 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:30 crc kubenswrapper[5099]: I0121 18:15:30.100366 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:30 crc kubenswrapper[5099]: I0121 18:15:30.100392 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:30 crc kubenswrapper[5099]: I0121 18:15:30.100403 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:30Z","lastTransitionTime":"2026-01-21T18:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:30 crc kubenswrapper[5099]: I0121 18:15:30.203164 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:30 crc kubenswrapper[5099]: I0121 18:15:30.203236 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:30 crc kubenswrapper[5099]: I0121 18:15:30.203248 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:30 crc kubenswrapper[5099]: I0121 18:15:30.203278 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:30 crc kubenswrapper[5099]: I0121 18:15:30.203293 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:30Z","lastTransitionTime":"2026-01-21T18:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:30 crc kubenswrapper[5099]: I0121 18:15:30.306248 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:30 crc kubenswrapper[5099]: I0121 18:15:30.306303 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:30 crc kubenswrapper[5099]: I0121 18:15:30.306314 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:30 crc kubenswrapper[5099]: I0121 18:15:30.306333 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:30 crc kubenswrapper[5099]: I0121 18:15:30.306345 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:30Z","lastTransitionTime":"2026-01-21T18:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:30 crc kubenswrapper[5099]: I0121 18:15:30.409474 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:30 crc kubenswrapper[5099]: I0121 18:15:30.409556 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:30 crc kubenswrapper[5099]: I0121 18:15:30.409575 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:30 crc kubenswrapper[5099]: I0121 18:15:30.409599 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:30 crc kubenswrapper[5099]: I0121 18:15:30.409614 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:30Z","lastTransitionTime":"2026-01-21T18:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:30 crc kubenswrapper[5099]: I0121 18:15:30.512089 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:30 crc kubenswrapper[5099]: I0121 18:15:30.512161 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:30 crc kubenswrapper[5099]: I0121 18:15:30.512174 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:30 crc kubenswrapper[5099]: I0121 18:15:30.512194 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:30 crc kubenswrapper[5099]: I0121 18:15:30.512206 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:30Z","lastTransitionTime":"2026-01-21T18:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:30 crc kubenswrapper[5099]: I0121 18:15:30.615563 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:30 crc kubenswrapper[5099]: I0121 18:15:30.615627 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:30 crc kubenswrapper[5099]: I0121 18:15:30.615640 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:30 crc kubenswrapper[5099]: I0121 18:15:30.615668 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:30 crc kubenswrapper[5099]: I0121 18:15:30.615681 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:30Z","lastTransitionTime":"2026-01-21T18:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:30 crc kubenswrapper[5099]: I0121 18:15:30.718970 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:30 crc kubenswrapper[5099]: I0121 18:15:30.719055 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:30 crc kubenswrapper[5099]: I0121 18:15:30.719076 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:30 crc kubenswrapper[5099]: I0121 18:15:30.719105 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:30 crc kubenswrapper[5099]: I0121 18:15:30.719135 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:30Z","lastTransitionTime":"2026-01-21T18:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:30 crc kubenswrapper[5099]: I0121 18:15:30.822372 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:30 crc kubenswrapper[5099]: I0121 18:15:30.822434 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:30 crc kubenswrapper[5099]: I0121 18:15:30.822447 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:30 crc kubenswrapper[5099]: I0121 18:15:30.822468 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:30 crc kubenswrapper[5099]: I0121 18:15:30.822483 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:30Z","lastTransitionTime":"2026-01-21T18:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:30 crc kubenswrapper[5099]: I0121 18:15:30.913696 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 18:15:30 crc kubenswrapper[5099]: E0121 18:15:30.913938 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 21 18:15:30 crc kubenswrapper[5099]: I0121 18:15:30.914126 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 18:15:30 crc kubenswrapper[5099]: E0121 18:15:30.914213 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
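Each "No sandbox for pod can be found" entry is followed by the same sync failure: pod sandboxes cannot be created while the runtime reports NetworkReady=false, and that only clears once a CNI configuration file appears in /etc/kubernetes/cni/net.d/. A hedged Go sketch of such a readiness check, written against the directory named in the log; the function is illustrative, not the kubelet's actual implementation.

    // Minimal sketch of the check behind "no CNI configuration file in
    // /etc/kubernetes/cni/net.d/": the network plugin is considered ready
    // only once a CNI config file shows up in the conf directory.
    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func hasCNIConfig(dir string) bool {
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		return false // unreadable or missing dir counts as not ready
    	}
    	for _, e := range entries {
    		ext := strings.ToLower(filepath.Ext(e.Name()))
    		if ext == ".conf" || ext == ".conflist" || ext == ".json" {
    			return true
    		}
    	}
    	return false
    }

    func main() {
    	dir := "/etc/kubernetes/cni/net.d"
    	if !hasCNIConfig(dir) {
    		fmt.Printf("network plugin not ready: no CNI configuration file in %s\n", dir)
    	}
    }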
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 21 18:15:30 crc kubenswrapper[5099]: I0121 18:15:30.925235 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:30 crc kubenswrapper[5099]: I0121 18:15:30.925608 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:30 crc kubenswrapper[5099]: I0121 18:15:30.925707 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:30 crc kubenswrapper[5099]: I0121 18:15:30.925850 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:30 crc kubenswrapper[5099]: I0121 18:15:30.925945 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:30Z","lastTransitionTime":"2026-01-21T18:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:31 crc kubenswrapper[5099]: I0121 18:15:31.028991 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:31 crc kubenswrapper[5099]: I0121 18:15:31.029054 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:31 crc kubenswrapper[5099]: I0121 18:15:31.029064 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:31 crc kubenswrapper[5099]: I0121 18:15:31.029084 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:31 crc kubenswrapper[5099]: I0121 18:15:31.029096 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:31Z","lastTransitionTime":"2026-01-21T18:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:31 crc kubenswrapper[5099]: I0121 18:15:31.132100 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:31 crc kubenswrapper[5099]: I0121 18:15:31.132156 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:31 crc kubenswrapper[5099]: I0121 18:15:31.132168 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:31 crc kubenswrapper[5099]: I0121 18:15:31.132196 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:31 crc kubenswrapper[5099]: I0121 18:15:31.132212 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:31Z","lastTransitionTime":"2026-01-21T18:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:31 crc kubenswrapper[5099]: I0121 18:15:31.235375 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:31 crc kubenswrapper[5099]: I0121 18:15:31.235463 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:31 crc kubenswrapper[5099]: I0121 18:15:31.235481 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:31 crc kubenswrapper[5099]: I0121 18:15:31.235510 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:31 crc kubenswrapper[5099]: I0121 18:15:31.235531 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:31Z","lastTransitionTime":"2026-01-21T18:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:31 crc kubenswrapper[5099]: I0121 18:15:31.339080 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:31 crc kubenswrapper[5099]: I0121 18:15:31.339139 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:31 crc kubenswrapper[5099]: I0121 18:15:31.339153 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:31 crc kubenswrapper[5099]: I0121 18:15:31.339174 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:31 crc kubenswrapper[5099]: I0121 18:15:31.339188 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:31Z","lastTransitionTime":"2026-01-21T18:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:31 crc kubenswrapper[5099]: I0121 18:15:31.442332 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:31 crc kubenswrapper[5099]: I0121 18:15:31.442439 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:31 crc kubenswrapper[5099]: I0121 18:15:31.442457 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:31 crc kubenswrapper[5099]: I0121 18:15:31.442477 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:31 crc kubenswrapper[5099]: I0121 18:15:31.442489 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:31Z","lastTransitionTime":"2026-01-21T18:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:31 crc kubenswrapper[5099]: I0121 18:15:31.546142 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:31 crc kubenswrapper[5099]: I0121 18:15:31.546578 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:31 crc kubenswrapper[5099]: I0121 18:15:31.546667 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:31 crc kubenswrapper[5099]: I0121 18:15:31.546770 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:31 crc kubenswrapper[5099]: I0121 18:15:31.546858 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:31Z","lastTransitionTime":"2026-01-21T18:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:31 crc kubenswrapper[5099]: I0121 18:15:31.649725 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:31 crc kubenswrapper[5099]: I0121 18:15:31.649808 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:31 crc kubenswrapper[5099]: I0121 18:15:31.649819 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:31 crc kubenswrapper[5099]: I0121 18:15:31.649838 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:31 crc kubenswrapper[5099]: I0121 18:15:31.649850 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:31Z","lastTransitionTime":"2026-01-21T18:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:31 crc kubenswrapper[5099]: I0121 18:15:31.752997 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:31 crc kubenswrapper[5099]: I0121 18:15:31.753054 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:31 crc kubenswrapper[5099]: I0121 18:15:31.753065 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:31 crc kubenswrapper[5099]: I0121 18:15:31.753083 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:31 crc kubenswrapper[5099]: I0121 18:15:31.753095 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:31Z","lastTransitionTime":"2026-01-21T18:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:31 crc kubenswrapper[5099]: I0121 18:15:31.856632 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:31 crc kubenswrapper[5099]: I0121 18:15:31.856699 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:31 crc kubenswrapper[5099]: I0121 18:15:31.856711 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:31 crc kubenswrapper[5099]: I0121 18:15:31.856731 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:31 crc kubenswrapper[5099]: I0121 18:15:31.856802 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:31Z","lastTransitionTime":"2026-01-21T18:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:31 crc kubenswrapper[5099]: I0121 18:15:31.913203 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tsdhb" Jan 21 18:15:31 crc kubenswrapper[5099]: E0121 18:15:31.913382 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tsdhb" podUID="0d26f0ad-829f-4f64-82b5-1292bd2316f0" Jan 21 18:15:31 crc kubenswrapper[5099]: I0121 18:15:31.913417 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 18:15:31 crc kubenswrapper[5099]: E0121 18:15:31.913647 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 21 18:15:31 crc kubenswrapper[5099]: I0121 18:15:31.959942 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:31 crc kubenswrapper[5099]: I0121 18:15:31.959996 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:31 crc kubenswrapper[5099]: I0121 18:15:31.960008 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:31 crc kubenswrapper[5099]: I0121 18:15:31.960028 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:31 crc kubenswrapper[5099]: I0121 18:15:31.960041 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:31Z","lastTransitionTime":"2026-01-21T18:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.062130 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.062222 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.062237 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.062255 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.062268 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:32Z","lastTransitionTime":"2026-01-21T18:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.165132 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.165177 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.165188 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.165205 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.165216 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:32Z","lastTransitionTime":"2026-01-21T18:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.268041 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.268100 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.268114 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.268135 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.268149 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:32Z","lastTransitionTime":"2026-01-21T18:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.366918 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.366996 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.367016 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.367050 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.367075 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:32Z","lastTransitionTime":"2026-01-21T18:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
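The "Recording event message for node" / "Node became not ready" pairs repeat on every status-update tick because the Ready condition stays False for as long as the CNI configuration is missing. The same condition the kubelet is writing here can be read back with client-go; a minimal sketch, assuming a kubeconfig at an illustrative path, with the node name "crc" taken from the log.

    // Hedged sketch: read the node's Ready condition with client-go, i.e.
    // the condition the "Node became not ready" entries above keep updating.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumed kubeconfig location; adjust for your environment.
    	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	node, err := clientset.CoreV1().Nodes().Get(context.TODO(), "crc", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, cond := range node.Status.Conditions {
    		if cond.Type == corev1.NodeReady {
    			fmt.Printf("Ready=%s reason=%s message=%s\n", cond.Status, cond.Reason, cond.Message)
    		}
    	}
    }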
Has your network provider started?"} Jan 21 18:15:32 crc kubenswrapper[5099]: E0121 18:15:32.383928 5099 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T18:15:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T18:15:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T18:15:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T18:15:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"78a4762e-39b8-4942-bf29-6a84c0f689b6\\\",\\\"systemUUID\\\":\\\"75e1b546-23f6-45fa-956a-1002c3d2f9b5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.389064 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.389123 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.389135 5099 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.389155 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.389176 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:32Z","lastTransitionTime":"2026-01-21T18:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:32 crc kubenswrapper[5099]: E0121 18:15:32.401333 5099 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T18:15:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T18:15:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T18:15:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T18:15:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"78a4762e-39b8-4942-bf29-6a84c0f689b6\\\",\\\"systemUUID\\\":\\\"75e1b546-23f6-45fa-956a-1002c3d2f9b5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.406014 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.406099 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.406121 5099 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.406148 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.406169 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:32Z","lastTransitionTime":"2026-01-21T18:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:32 crc kubenswrapper[5099]: E0121 18:15:32.418624 5099 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T18:15:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T18:15:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T18:15:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T18:15:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"78a4762e-39b8-4942-bf29-6a84c0f689b6\\\",\\\"systemUUID\\\":\\\"75e1b546-23f6-45fa-956a-1002c3d2f9b5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.423594 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.423655 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.423674 5099 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.423700 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.423722 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:32Z","lastTransitionTime":"2026-01-21T18:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:32 crc kubenswrapper[5099]: E0121 18:15:32.437239 5099 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T18:15:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T18:15:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T18:15:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T18:15:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"78a4762e-39b8-4942-bf29-6a84c0f689b6\\\",\\\"systemUUID\\\":\\\"75e1b546-23f6-45fa-956a-1002c3d2f9b5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.443204 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.443594 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.443781 5099 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.443864 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.443957 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:32Z","lastTransitionTime":"2026-01-21T18:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:32 crc kubenswrapper[5099]: E0121 18:15:32.455097 5099 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T18:15:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T18:15:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T18:15:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T18:15:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"78a4762e-39b8-4942-bf29-6a84c0f689b6\\\",\\\"systemUUID\\\":\\\"75e1b546-23f6-45fa-956a-1002c3d2f9b5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:32 crc kubenswrapper[5099]: E0121 18:15:32.455619 5099 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.457422 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.457520 5099 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.457591 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.457676 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.457753 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:32Z","lastTransitionTime":"2026-01-21T18:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.560253 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.560330 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.560349 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.560396 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.560416 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:32Z","lastTransitionTime":"2026-01-21T18:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.663215 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.663670 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.663778 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.663912 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.663983 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:32Z","lastTransitionTime":"2026-01-21T18:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.767241 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.768816 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.768906 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.768972 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.769035 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:32Z","lastTransitionTime":"2026-01-21T18:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.871450 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.871510 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.871560 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.871585 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.871598 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:32Z","lastTransitionTime":"2026-01-21T18:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.913807 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 18:15:32 crc kubenswrapper[5099]: E0121 18:15:32.914137 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.914252 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 18:15:32 crc kubenswrapper[5099]: E0121 18:15:32.914454 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.915135 5099 scope.go:117] "RemoveContainer" containerID="1dec0ab2e83799703e0d0ba5f91da97aaa7f611990710cb8ee7aa49e64a46994" Jan 21 18:15:32 crc kubenswrapper[5099]: E0121 18:15:32.915356 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 21 18:15:32 crc kubenswrapper[5099]: E0121 18:15:32.918190 5099 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 21 18:15:32 crc kubenswrapper[5099]: E0121 18:15:32.919424 5099 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.973812 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.974222 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.974308 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.974409 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:32 crc kubenswrapper[5099]: I0121 18:15:32.974494 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:32Z","lastTransitionTime":"2026-01-21T18:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:33 crc kubenswrapper[5099]: I0121 18:15:33.077200 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:33 crc kubenswrapper[5099]: I0121 18:15:33.077645 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:33 crc kubenswrapper[5099]: I0121 18:15:33.077720 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:33 crc kubenswrapper[5099]: I0121 18:15:33.077833 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:33 crc kubenswrapper[5099]: I0121 18:15:33.077901 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:33Z","lastTransitionTime":"2026-01-21T18:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:33 crc kubenswrapper[5099]: I0121 18:15:33.180362 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:33 crc kubenswrapper[5099]: I0121 18:15:33.180410 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:33 crc kubenswrapper[5099]: I0121 18:15:33.180420 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:33 crc kubenswrapper[5099]: I0121 18:15:33.180438 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:33 crc kubenswrapper[5099]: I0121 18:15:33.180449 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:33Z","lastTransitionTime":"2026-01-21T18:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:33 crc kubenswrapper[5099]: I0121 18:15:33.283084 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:33 crc kubenswrapper[5099]: I0121 18:15:33.283159 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:33 crc kubenswrapper[5099]: I0121 18:15:33.283172 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:33 crc kubenswrapper[5099]: I0121 18:15:33.283194 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:33 crc kubenswrapper[5099]: I0121 18:15:33.283208 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:33Z","lastTransitionTime":"2026-01-21T18:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:33 crc kubenswrapper[5099]: I0121 18:15:33.367901 5099 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Jan 21 18:15:33 crc kubenswrapper[5099]: I0121 18:15:33.385665 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:33 crc kubenswrapper[5099]: I0121 18:15:33.385724 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:33 crc kubenswrapper[5099]: I0121 18:15:33.385753 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:33 crc kubenswrapper[5099]: I0121 18:15:33.385774 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:33 crc kubenswrapper[5099]: I0121 18:15:33.385785 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:33Z","lastTransitionTime":"2026-01-21T18:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:33 crc kubenswrapper[5099]: I0121 18:15:33.488422 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:33 crc kubenswrapper[5099]: I0121 18:15:33.488477 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:33 crc kubenswrapper[5099]: I0121 18:15:33.488490 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:33 crc kubenswrapper[5099]: I0121 18:15:33.488509 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:33 crc kubenswrapper[5099]: I0121 18:15:33.488523 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:33Z","lastTransitionTime":"2026-01-21T18:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:33 crc kubenswrapper[5099]: I0121 18:15:33.590792 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:33 crc kubenswrapper[5099]: I0121 18:15:33.590851 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:33 crc kubenswrapper[5099]: I0121 18:15:33.590863 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:33 crc kubenswrapper[5099]: I0121 18:15:33.590880 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:33 crc kubenswrapper[5099]: I0121 18:15:33.590892 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:33Z","lastTransitionTime":"2026-01-21T18:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:33 crc kubenswrapper[5099]: I0121 18:15:33.693158 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:33 crc kubenswrapper[5099]: I0121 18:15:33.693216 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:33 crc kubenswrapper[5099]: I0121 18:15:33.693231 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:33 crc kubenswrapper[5099]: I0121 18:15:33.693253 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:33 crc kubenswrapper[5099]: I0121 18:15:33.693266 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:33Z","lastTransitionTime":"2026-01-21T18:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:33 crc kubenswrapper[5099]: I0121 18:15:33.795793 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:33 crc kubenswrapper[5099]: I0121 18:15:33.795881 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:33 crc kubenswrapper[5099]: I0121 18:15:33.795892 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:33 crc kubenswrapper[5099]: I0121 18:15:33.795910 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:33 crc kubenswrapper[5099]: I0121 18:15:33.795922 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:33Z","lastTransitionTime":"2026-01-21T18:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:33 crc kubenswrapper[5099]: I0121 18:15:33.897984 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:33 crc kubenswrapper[5099]: I0121 18:15:33.898055 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:33 crc kubenswrapper[5099]: I0121 18:15:33.898072 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:33 crc kubenswrapper[5099]: I0121 18:15:33.898093 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:33 crc kubenswrapper[5099]: I0121 18:15:33.898107 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:33Z","lastTransitionTime":"2026-01-21T18:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:33 crc kubenswrapper[5099]: I0121 18:15:33.913900 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tsdhb" Jan 21 18:15:33 crc kubenswrapper[5099]: E0121 18:15:33.914093 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tsdhb" podUID="0d26f0ad-829f-4f64-82b5-1292bd2316f0" Jan 21 18:15:33 crc kubenswrapper[5099]: I0121 18:15:33.914426 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 18:15:33 crc kubenswrapper[5099]: E0121 18:15:33.914626 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 21 18:15:33 crc kubenswrapper[5099]: I0121 18:15:33.929028 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bb5lc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fedcb6dd-93e2-4530-b748-52a296d7809d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bb5lc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:33 crc kubenswrapper[5099]: I0121 18:15:33.940392 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a3258e1-12c7-4a69-8c70-81a224fb787f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee286e0146a85cafd23651aabbbe69ebe16248b425e092b613ba569a236b6e20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c2f414962af81ed6e55d81dd5f9dd9dd326f051f9f43d3547232d163c9c8dfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c2f414962af81ed6e55d81dd5f9dd9dd326f051f9f43d3547232d163c9c8dfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T18:14:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:14:04Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:33 crc kubenswrapper[5099]: I0121 18:15:33.960562 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a40c7046-7081-492d-8099-e40a88ecf0ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://95eb561748a9a0787cdcdcbb483eb6c1e2c1949db936d7d25fbfe7f9cfc5db88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://af7a566641ee4a32e0af093712f5a413ba74a4178f0ad380ede1b059032e730a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[
0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://52c12ba2dd207284ad9505418797edb5216cc4e70217b3f68d2e9ca82396e7f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ea5e150318dfc21c5ddf7304a2aa589a3a74b69339533e126143c429353ee516\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:14:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 
18:15:33 crc kubenswrapper[5099]: I0121 18:15:33.975793 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:33 crc kubenswrapper[5099]: I0121 18:15:33.988023 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tsdhb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d26f0ad-829f-4f64-82b5-1292bd2316f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghhnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghhnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tsdhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.000434 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b19b831f-eaf0-4c77-859b-84eb9a5f233c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gknfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gknfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hsl47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.003215 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.003266 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.003307 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.003330 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.003347 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:34Z","lastTransitionTime":"2026-01-21T18:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.012265 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2q8ng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57ef6e89-3637-4516-a464-973f45d9ed03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6xzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2q8ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.027457 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.041458 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.053162 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-s88dj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20fc4331-f128-4a9a-b77f-85af1cf094cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7x9c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s88dj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" 
Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.066470 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f829d4ee-178c-4ccd-9dc3-d0eb0300919f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4b9669ba5715cd91dacfc8e6be29f5830419da2d302adcb6b5fa29ef07eac6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d7509aab4bd3b4f6d8703e94734d66e77bba951303378eb60ba01943532bfb41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3fb63119f6a31701b62cf7591777dad22a4f69872d5cdb087308b8b3f6ded84d\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://950a43772ad6417d26292c8d55cf9857142fb1e232d309355724767d71735f95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950a43772ad6417d26292c8d55cf9857142fb1e232d309355724767d71735f95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T18:14:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:14:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.082140 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.104404 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7521550-bc40-43eb-bcb0-f563416d810b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with incomplete 
status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45a
ced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-svjkb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.105383 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.105421 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.105432 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.105450 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.105460 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:34Z","lastTransitionTime":"2026-01-21T18:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.117583 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-nxrc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd3b8a6d-69a8-4079-a747-f379b71bcafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7xvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7xvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-nxrc9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.140345 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb04eac-bd35-447c-88ec-2f7b7296cb0e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://b5effc70b8488095428d6d5459b5766bcc3d5f049f11532a3e33c309d5895ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://bccc1caa9affdcec7c83cf922fb2dcf8634fb3bfe34f4d0efc602ef68e8ee7b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\
"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://820b29ee082671c0eb57e0818417c685033d77f07d6f3616eaa1d2fd22cfa628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://409970ffae80bc9259bb84be447a16b4506850a5ee1651f83b231fbb4e423cd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://ab3d8ac4f69b1d44b4ee29b2914f9f30a7e966194ce3efcfa2079bf66e522fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"
data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0b2410052c2fa6360fd3b85be6092bfad8e857fbf2ca993187238c13769f8832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b2410052c2fa6360fd3b85be6092bfad8e857fbf2ca993187238c13769f8832\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T18:14:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9604f7270770e683bf5d664f618d34ca92249229df8a831af789d013e699f878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9604f7270770e683bf5d664f618d34ca92249229df8a831af789d013e699f878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T18:14:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T18:14:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6fa7c8ea86015110480c177f1d09a0bacd7ac31de42f05a4e2a6691ef15f510a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fa7c8ea86015110480c177f1d09a0bacd7ac31de42f05a4e2a6691ef15f510a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T18:14:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T18:14:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:14:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.155846 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"173cce9e-0a3e-4d85-b057-083e13852fa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://69214b9af392a2ff7edbb9c7275209780398e51d77d93f6fc12bdb6df10c0018\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b52ee1ab5f30ea08acd0bcb057da0e09c87c2f8116e08d2d2d7bd32030c95907\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://77fff9816418e4672b53ddb5f6696e449760e9cf1da327c01939f0df0bb333f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1dec0ab2e83799703e0d0ba5f91da97aaa7f611990710cb8ee7aa49e64a46994\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dec0ab2e83799703e0d0ba5f91da97aaa7f611990710cb8ee7aa49e64a46994\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T18:15:11Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW0121 18:15:10.717930 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 18:15:10.718167 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0121 18:15:10.719290 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2744791526/tls.crt::/tmp/serving-cert-2744791526/tls.key\\\\\\\"\\\\nI0121 18:15:10.978164 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 18:15:10.981243 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 18:15:10.981268 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 18:15:10.981347 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 18:15:10.981360 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 18:15:10.988236 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 18:15:10.988291 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 18:15:10.988264 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 18:15:10.988297 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 18:15:10.988309 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 18:15:10.988312 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 18:15:10.988316 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 18:15:10.988319 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 18:15:10.990920 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T18:15:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a571febb49a23145b3d2cad3e4cc539fe35f0d8fdd7c4876c9868a1d1889ac58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d3315339d0385d2d2682ac27476be8f619f5936da9319a84a35621133d98243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d3315339d0385d2d2682ac27476be8f619f5936da9319a84a35621133d98243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T18:14:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:14:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.213875 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.213939 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.213950 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.213971 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.213985 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:34Z","lastTransitionTime":"2026-01-21T18:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.221208 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.234710 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.247127 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-6pvpm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9b34413-4767-4d59-b13b-8f882453977a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mtl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6pvpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.316607 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.316665 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.316676 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.316694 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.316706 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:34Z","lastTransitionTime":"2026-01-21T18:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.419155 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.419217 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.419229 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.419248 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.419263 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:34Z","lastTransitionTime":"2026-01-21T18:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.522179 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.522243 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.522255 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.522277 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.522295 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:34Z","lastTransitionTime":"2026-01-21T18:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.534901 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-s88dj" event={"ID":"20fc4331-f128-4a9a-b77f-85af1cf094cf","Type":"ContainerStarted","Data":"5379729026cfeb608230ac78281cd992c20247a01eeabf8a89c421a69148e3f6"} Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.537231 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"22f3a70ab164c1851c88669109536e4211a54d72ac72ea71c2ed822857927958"} Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.537296 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"d463ab518b7aa61baacf43c286724f8cd888ad2df20e548be8ce7fcc599bf3c5"} Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.558060 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb04eac-bd35-447c-88ec-2f7b7296cb0e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://b5effc70b8488095428d6d5459b5766bcc3d5f049f11532a3e33c309d5895ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://bccc1caa9affdcec7c83cf922fb2dcf8634fb3bfe34f4d0efc602ef68e8ee7b3\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://820b29ee082671c0eb57e0818417c685033d77f07d6f3616eaa1d2fd22cfa628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://409970ffae80bc9259bb84be447a16b4506850a5ee1651f83b231fbb4e423cd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://ab3d8ac4f69b1d44b4ee29b2914f9f30a7e966194ce3efcfa2079bf66e522fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0b2410052c2fa6360fd3b85be6092bfad8e857fbf2ca993187238c13769f8832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b2410052c2fa6360fd3b85be6092bfad8e857fbf2ca993187238c13769f8832\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T18:14:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9604f7270770e683bf5d664f618d34ca92249229df8a831af789d013e699f878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9604f7270770e683bf5d664f618d34ca92249229df8a831af789d013e699f878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T18:14:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T18:14:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6fa7c8ea86015110480c177f1d09a0bacd7ac31de42f05a4e2a6691ef15f510a\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fa7c8ea86015110480c177f1d09a0bacd7ac31de42f05a4e2a6691ef15f510a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T18:14:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T18:14:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:14:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.574656 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"173cce9e-0a3e-4d85-b057-083e13852fa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://69214b9af392a2ff7edbb9c7275209780398e51d77d93f6fc12bdb6df10c0018\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b52ee1ab5f30ea08acd0bcb057da0e09c87c2f8116e08d2d2d7bd32030c95907\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://77fff9816418e4672b53ddb5f6696e449760e9cf1da327c01939f0df0bb333f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1dec0ab2e83799703e0d0ba5f91da97aaa7f611990710cb8ee7aa49e64a46994\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dec0ab2e83799703e0d0ba5f91da97aaa7f611990710cb8ee7aa49e64a46994\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T18:15:11Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW0121 18:15:10.717930 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 18:15:10.718167 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0121 18:15:10.719290 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2744791526/tls.crt::/tmp/serving-cert-2744791526/tls.key\\\\\\\"\\\\nI0121 18:15:10.978164 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 18:15:10.981243 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 18:15:10.981268 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 18:15:10.981347 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 18:15:10.981360 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 18:15:10.988236 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 18:15:10.988291 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 18:15:10.988264 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 18:15:10.988297 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 18:15:10.988309 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 18:15:10.988312 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 18:15:10.988316 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 18:15:10.988319 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 18:15:10.990920 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T18:15:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a571febb49a23145b3d2cad3e4cc539fe35f0d8fdd7c4876c9868a1d1889ac58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d3315339d0385d2d2682ac27476be8f619f5936da9319a84a35621133d98243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d3315339d0385d2d2682ac27476be8f619f5936da9319a84a35621133d98243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T18:14:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:14:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.589778 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.602899 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.615794 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-6pvpm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9b34413-4767-4d59-b13b-8f882453977a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mtl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6pvpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.626983 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.627050 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.627070 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.627097 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.627117 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:34Z","lastTransitionTime":"2026-01-21T18:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.630160 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bb5lc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fedcb6dd-93e2-4530-b748-52a296d7809d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bb5lc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.641112 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a3258e1-12c7-4a69-8c70-81a224fb787f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee286e0146a85cafd23651aabbbe69ebe16248b425e092b613ba569a236b6e20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c2f414962af81ed6e55d81dd5f9dd9dd326f051f9f43d3547232d163c9c8dfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c2f414962af81ed6e55d81dd5f9dd9dd326f051f9f43d3547232d163c9c8dfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T18:14:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:14:04Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.654717 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a40c7046-7081-492d-8099-e40a88ecf0ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://95eb561748a9a0787cdcdcbb483eb6c1e2c1949db936d7d25fbfe7f9cfc5db88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://af7a566641ee4a32e0af093712f5a413ba74a4178f0ad380ede1b059032e730a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[
0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://52c12ba2dd207284ad9505418797edb5216cc4e70217b3f68d2e9ca82396e7f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ea5e150318dfc21c5ddf7304a2aa589a3a74b69339533e126143c429353ee516\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:14:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 
18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.729055 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.729108 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.729121 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.729138 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.729150 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:34Z","lastTransitionTime":"2026-01-21T18:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.731485 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.743261 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tsdhb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d26f0ad-829f-4f64-82b5-1292bd2316f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghhnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghhnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tsdhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.753958 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b19b831f-eaf0-4c77-859b-84eb9a5f233c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gknfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gknfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hsl47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.764810 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2q8ng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57ef6e89-3637-4516-a464-973f45d9ed03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6xzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2q8ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.777320 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.788455 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.797352 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-s88dj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20fc4331-f128-4a9a-b77f-85af1cf094cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://5379729026cfeb608230ac78281cd992c20247a01eeabf8a89c421a69148e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:15:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7x9c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s88dj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.809282 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f829d4ee-178c-4ccd-9dc3-d0eb0300919f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4b9669ba5715cd91dacfc8e6be29f5830419da2d302adcb6b5fa29ef07eac6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d7509aab4bd3b4f6d8703e94734d66e77bba951303378eb60ba01943532bfb41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"
cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3fb63119f6a31701b62cf7591777dad22a4f69872d5cdb087308b8b3f6ded84d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://950a43772ad6417d26292c8d55cf9857142fb1e232d309355724767d71735f95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950a43772ad6417d26292c8d55cf9857142fb1e232d309355724767d71735f95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T18:14:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:14:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.821001 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.831487 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.831552 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.831566 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.831588 5099 kubelet_node_status.go:736] "Recording event message 
for node" node="crc" event="NodeNotReady" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.831601 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:34Z","lastTransitionTime":"2026-01-21T18:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.913818 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.913760 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7521550-bc40-43eb-bcb0-f563416d810b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-svjkb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:34 crc kubenswrapper[5099]: E0121 18:15:34.914053 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.914081 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 18:15:34 crc kubenswrapper[5099]: E0121 18:15:34.914816 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.926910 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-nxrc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd3b8a6d-69a8-4079-a747-f379b71bcafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7xvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7xvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-nxrc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.933810 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.933873 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.933890 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.933912 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.933927 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:34Z","lastTransitionTime":"2026-01-21T18:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.942360 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-6pvpm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9b34413-4767-4d59-b13b-8f882453977a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mtl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6pvpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.966018 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bb5lc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fedcb6dd-93e2-4530-b748-52a296d7809d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bb5lc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.979349 
5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a3258e1-12c7-4a69-8c70-81a224fb787f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee286e0146a85cafd23651aabbbe69ebe16248b425e092b613ba569a236b6e20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c2f414962af81ed6e55d81dd5f9dd9dd326f051f9f43d3547232d163c9c8dfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c2f414962af81ed6e55d81dd5f9dd9dd326f051f9f43d3547232d163c9c8dfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T18:14:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\
\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:14:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:34 crc kubenswrapper[5099]: I0121 18:15:34.992797 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a40c7046-7081-492d-8099-e40a88ecf0ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://95eb561748a9a0787cdcdcbb483eb6c1e2c1949db936d7d25fbfe7f9cfc5db88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://af7a566641ee4a32e0af093712f5a413ba74a4178f0ad380ede1b059032e730a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"
startedAt\\\":\\\"2026-01-21T18:14:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://52c12ba2dd207284ad9505418797edb5216cc4e70217b3f68d2e9ca82396e7f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ea5e150318dfc21c5ddf7304a2aa589a3a74b69339533e126143c429353ee516\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:14:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.004577 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.016233 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tsdhb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d26f0ad-829f-4f64-82b5-1292bd2316f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghhnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghhnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tsdhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.029765 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b19b831f-eaf0-4c77-859b-84eb9a5f233c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gknfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gknfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hsl47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.054316 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.054765 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.054790 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.054815 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.054829 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:35Z","lastTransitionTime":"2026-01-21T18:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.054947 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2q8ng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57ef6e89-3637-4516-a464-973f45d9ed03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6xzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2q8ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.070102 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.084159 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.096457 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-s88dj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20fc4331-f128-4a9a-b77f-85af1cf094cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://5379729026cfeb608230ac78281cd992c20247a01eeabf8a89c421a69148e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:15:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7x9c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s88dj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.106978 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f829d4ee-178c-4ccd-9dc3-d0eb0300919f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4b9669ba5715cd91dacfc8e6be29f5830419da2d302adcb6b5fa29ef07eac6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d7509aab4bd3b4f6d8703e94734d66e77bba951303378eb60ba01943532bfb41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"
cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3fb63119f6a31701b62cf7591777dad22a4f69872d5cdb087308b8b3f6ded84d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://950a43772ad6417d26292c8d55cf9857142fb1e232d309355724767d71735f95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950a43772ad6417d26292c8d55cf9857142fb1e232d309355724767d71735f95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T18:14:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:14:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.116384 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://22f3a70ab164c1851c88669109536e4211a54d72ac72ea71c2ed822857927958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:15:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0,1000500000],\\\"uid\\\":1000500000}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d463ab518b7aa61baacf43c286724f8cd888ad2df20e548be8ce7fcc599bf3c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:15:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0,1000500000],\\\"uid\\\":1000500000}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 
18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.130174 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7521550-bc40-43eb-bcb0-f563416d810b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss74r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-svjkb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.140525 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-nxrc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd3b8a6d-69a8-4079-a747-f379b71bcafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7xvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7xvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-nxrc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.159128 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.159193 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.159208 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.159228 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.159242 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:35Z","lastTransitionTime":"2026-01-21T18:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.161122 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb04eac-bd35-447c-88ec-2f7b7296cb0e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://b5effc70b8488095428d6d5459b5766bcc3d5f049f11532a3e33c309d5895ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://bccc1caa9affdcec7c83cf922fb2dcf8634fb3bfe34f4d0efc602ef68e8ee7b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\
\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://820b29ee082671c0eb57e0818417c685033d77f07d6f3616eaa1d2fd22cfa628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://409970ffae80bc9259bb84be447a16b4506850a5ee1651f83b231fbb4e423cd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://ab3d8ac4f69b1d44b4ee29b2914f9f30a7e966194ce3efcfa2079bf66e522fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[
{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0b2410052c2fa6360fd3b85be6092bfad8e857fbf2ca993187238c13769f8832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b2410052c2fa6360fd3b85be6092bfad8e857fbf2ca993187238c13769f8832\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T18:14:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9604f7270770e683bf5d664f618d34ca92249229df8a831af789d013e699f878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9604f7270770e683bf5d664f618d34ca92249229df8a831af789d013e699f878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T18:14:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T18:14:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6fa7c8ea86015110480c177f1d09a0bacd7ac31de42f05a4e2a6691ef15f510a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fa7c8ea86015110480c177f1d09a0bacd7ac31de42f05a4e2a6691ef15f510a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T18:14:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T18:14:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPa
th\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:14:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.175815 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"173cce9e-0a3e-4d85-b057-083e13852fa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T18:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://69214b9af392a2ff7edbb9c7275209780398e51d77d93f6fc12bdb6df10c0018\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b52ee1ab5f30ea08acd0bcb057da0e09c87c2f8116e08d2d2d7bd32030c95907\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apis
erver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://77fff9816418e4672b53ddb5f6696e449760e9cf1da327c01939f0df0bb333f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1dec0ab2e83799703e0d0ba5f91da97aaa7f611990710cb8ee7aa49e64a46994\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dec0ab2e83799703e0d0ba5f91da97aaa7f611990710cb8ee7aa49e64a46994\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T18:15:11Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW0121 18:15:10.717930 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 18:15:10.718167 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0121 18:15:10.719290 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2744791526/tls.crt::/tmp/serving-cert-2744791526/tls.key\\\\\\\"\\\\nI0121 18:15:10.978164 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 18:15:10.981243 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 
18:15:10.981268 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 18:15:10.981347 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 18:15:10.981360 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 18:15:10.988236 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 18:15:10.988291 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 18:15:10.988264 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 18:15:10.988297 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 18:15:10.988309 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 18:15:10.988312 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 18:15:10.988316 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 18:15:10.988319 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 18:15:10.990920 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T18:15:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a571febb49a23145b3d2cad3e4cc539fe35f0d8fdd7c4876c9868a1d1889ac58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T18:14:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d3315339d0385d2d2682ac27476be8f619f5936da9319a84a35621133d98243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12
bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d3315339d0385d2d2682ac27476be8f619f5936da9319a84a35621133d98243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T18:14:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T18:14:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T18:14:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.189914 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.199801 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T18:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.271774 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.271844 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.271855 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.271876 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.271889 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:35Z","lastTransitionTime":"2026-01-21T18:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.374846 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.374908 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.374920 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.374940 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.374954 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:35Z","lastTransitionTime":"2026-01-21T18:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.477415 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.477458 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.477467 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.477484 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.477495 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:35Z","lastTransitionTime":"2026-01-21T18:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.579786 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.579835 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.579849 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.579867 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.579879 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:35Z","lastTransitionTime":"2026-01-21T18:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.682956 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.683022 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.683033 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.683067 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.683081 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:35Z","lastTransitionTime":"2026-01-21T18:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.786378 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.786452 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.786473 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.786497 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.786512 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:35Z","lastTransitionTime":"2026-01-21T18:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.889696 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.889764 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.889777 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.889799 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.889812 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:35Z","lastTransitionTime":"2026-01-21T18:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.913483 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tsdhb" Jan 21 18:15:35 crc kubenswrapper[5099]: I0121 18:15:35.913720 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 18:15:35 crc kubenswrapper[5099]: E0121 18:15:35.914273 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 21 18:15:35 crc kubenswrapper[5099]: E0121 18:15:35.914357 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tsdhb" podUID="0d26f0ad-829f-4f64-82b5-1292bd2316f0" Jan 21 18:15:36 crc kubenswrapper[5099]: I0121 18:15:36.003455 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:36 crc kubenswrapper[5099]: I0121 18:15:36.003513 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:36 crc kubenswrapper[5099]: I0121 18:15:36.003527 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:36 crc kubenswrapper[5099]: I0121 18:15:36.003551 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:36 crc kubenswrapper[5099]: I0121 18:15:36.003564 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:36Z","lastTransitionTime":"2026-01-21T18:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:36 crc kubenswrapper[5099]: I0121 18:15:36.146659 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:36 crc kubenswrapper[5099]: I0121 18:15:36.146717 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:36 crc kubenswrapper[5099]: I0121 18:15:36.146729 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:36 crc kubenswrapper[5099]: I0121 18:15:36.146762 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:36 crc kubenswrapper[5099]: I0121 18:15:36.146780 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:36Z","lastTransitionTime":"2026-01-21T18:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:36 crc kubenswrapper[5099]: I0121 18:15:36.310757 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:36 crc kubenswrapper[5099]: I0121 18:15:36.310822 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:36 crc kubenswrapper[5099]: I0121 18:15:36.310833 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:36 crc kubenswrapper[5099]: I0121 18:15:36.310855 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:36 crc kubenswrapper[5099]: I0121 18:15:36.310867 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:36Z","lastTransitionTime":"2026-01-21T18:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:36 crc kubenswrapper[5099]: I0121 18:15:36.558373 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:36 crc kubenswrapper[5099]: I0121 18:15:36.558492 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:36 crc kubenswrapper[5099]: I0121 18:15:36.558508 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:36 crc kubenswrapper[5099]: I0121 18:15:36.558530 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:36 crc kubenswrapper[5099]: I0121 18:15:36.558548 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:36Z","lastTransitionTime":"2026-01-21T18:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:36 crc kubenswrapper[5099]: I0121 18:15:36.561747 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" event={"ID":"d7521550-bc40-43eb-bcb0-f563416d810b","Type":"ContainerStarted","Data":"329659e4786b839f2082f85117e8f694a6aa7d675d581f47afed9010eae8cdc7"} Jan 21 18:15:36 crc kubenswrapper[5099]: I0121 18:15:36.587823 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=15.587802632 podStartE2EDuration="15.587802632s" podCreationTimestamp="2026-01-21 18:15:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:15:36.586059686 +0000 UTC m=+94.000022167" watchObservedRunningTime="2026-01-21 18:15:36.587802632 +0000 UTC m=+94.001765093" Jan 21 18:15:36 crc kubenswrapper[5099]: I0121 18:15:36.661331 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:36 crc kubenswrapper[5099]: I0121 18:15:36.661379 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:36 crc kubenswrapper[5099]: I0121 18:15:36.661392 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:36 crc kubenswrapper[5099]: I0121 18:15:36.661411 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:36 crc kubenswrapper[5099]: I0121 18:15:36.661423 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:36Z","lastTransitionTime":"2026-01-21T18:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:36 crc kubenswrapper[5099]: I0121 18:15:36.763541 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:36 crc kubenswrapper[5099]: I0121 18:15:36.763593 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:36 crc kubenswrapper[5099]: I0121 18:15:36.763603 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:36 crc kubenswrapper[5099]: I0121 18:15:36.763624 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:36 crc kubenswrapper[5099]: I0121 18:15:36.763635 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:36Z","lastTransitionTime":"2026-01-21T18:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:36 crc kubenswrapper[5099]: I0121 18:15:36.816486 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=16.816457991 podStartE2EDuration="16.816457991s" podCreationTimestamp="2026-01-21 18:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:15:36.814425747 +0000 UTC m=+94.228388228" watchObservedRunningTime="2026-01-21 18:15:36.816457991 +0000 UTC m=+94.230420472" Jan 21 18:15:36 crc kubenswrapper[5099]: I0121 18:15:36.866066 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:36 crc kubenswrapper[5099]: I0121 18:15:36.866122 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:36 crc kubenswrapper[5099]: I0121 18:15:36.866138 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:36 crc kubenswrapper[5099]: I0121 18:15:36.866157 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:36 crc kubenswrapper[5099]: I0121 18:15:36.866170 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:36Z","lastTransitionTime":"2026-01-21T18:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:36 crc kubenswrapper[5099]: I0121 18:15:36.912760 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 18:15:36 crc kubenswrapper[5099]: E0121 18:15:36.912891 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 21 18:15:36 crc kubenswrapper[5099]: I0121 18:15:36.913719 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 18:15:36 crc kubenswrapper[5099]: E0121 18:15:36.914021 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 21 18:15:36 crc kubenswrapper[5099]: I0121 18:15:36.952487 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=15.952464951 podStartE2EDuration="15.952464951s" podCreationTimestamp="2026-01-21 18:15:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:15:36.950029878 +0000 UTC m=+94.363992339" watchObservedRunningTime="2026-01-21 18:15:36.952464951 +0000 UTC m=+94.366427422" Jan 21 18:15:36 crc kubenswrapper[5099]: I0121 18:15:36.953151 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=16.953144279 podStartE2EDuration="16.953144279s" podCreationTimestamp="2026-01-21 18:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:15:36.929343843 +0000 UTC m=+94.343306304" watchObservedRunningTime="2026-01-21 18:15:36.953144279 +0000 UTC m=+94.367106740" Jan 21 18:15:36 crc kubenswrapper[5099]: I0121 18:15:36.972004 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:36 crc kubenswrapper[5099]: I0121 18:15:36.972059 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:36 crc kubenswrapper[5099]: I0121 18:15:36.972071 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:36 crc kubenswrapper[5099]: I0121 18:15:36.972090 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:36 crc kubenswrapper[5099]: I0121 18:15:36.972103 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:36Z","lastTransitionTime":"2026-01-21T18:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.094704 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.094766 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.094777 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.094794 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.094803 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:37Z","lastTransitionTime":"2026-01-21T18:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.196514 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.196569 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.196583 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.196601 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.196640 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:37Z","lastTransitionTime":"2026-01-21T18:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.299200 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.299239 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.299254 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.299270 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.299279 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:37Z","lastTransitionTime":"2026-01-21T18:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.396818 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.397012 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.397062 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.397129 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 18:15:37 crc kubenswrapper[5099]: E0121 18:15:37.397127 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 18:15:37 crc kubenswrapper[5099]: E0121 18:15:37.397188 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 18:15:37 crc kubenswrapper[5099]: E0121 18:15:37.397203 5099 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 18:15:37 crc kubenswrapper[5099]: E0121 18:15:37.397268 5099 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 18:15:37 crc kubenswrapper[5099]: E0121 18:15:37.397301 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-21 18:15:53.397273747 +0000 UTC m=+110.811236208 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 18:15:37 crc kubenswrapper[5099]: E0121 18:15:37.397355 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-21 18:15:53.397334928 +0000 UTC m=+110.811297379 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 18:15:37 crc kubenswrapper[5099]: E0121 18:15:37.397462 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 18:15:37 crc kubenswrapper[5099]: E0121 18:15:37.397474 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 18:15:37 crc kubenswrapper[5099]: E0121 18:15:37.397488 5099 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 18:15:37 crc kubenswrapper[5099]: E0121 18:15:37.397514 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-21 18:15:53.397507843 +0000 UTC m=+110.811470304 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 18:15:37 crc kubenswrapper[5099]: E0121 18:15:37.397562 5099 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 18:15:37 crc kubenswrapper[5099]: E0121 18:15:37.397587 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-21 18:15:53.397581654 +0000 UTC m=+110.811544115 (durationBeforeRetry 16s). 
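
The "not registered" MountVolume failures above are not API-server lookups going missing: the kubelet resolves ConfigMaps and Secrets for volumes from a local object manager that only serves objects registered for a pod, so SetUp fails fast with this error, apparently because the restarted kubelet has not yet re-registered these pods. The toy model below is a stand-in for illustration, not client-go code; only the namespace and object name are taken from the log.

#!/usr/bin/env python3
# Toy model of the "not registered" failures above. The cache is a stand-in,
# not the kubelet's actual object manager; the namespace/name come from the log.
class NotRegistered(LookupError):
    pass

class ObjectCache:
    def __init__(self):
        self._registered = set()

    def register(self, namespace, name):
        self._registered.add((namespace, name))

    def get(self, namespace, name):
        if (namespace, name) not in self._registered:
            raise NotRegistered(f'object "{namespace}"/"{name}" not registered')
        return {"namespace": namespace, "name": name}

cache = ObjectCache()
try:
    cache.get("openshift-network-diagnostics", "kube-root-ca.crt")
except NotRegistered as exc:
    print(exc)  # same shape as the projected.go:289 errors above
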
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.401776 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.401818 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.401828 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.401843 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.401854 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:37Z","lastTransitionTime":"2026-01-21T18:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.498883 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:15:37 crc kubenswrapper[5099]: E0121 18:15:37.499464 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:15:53.499438891 +0000 UTC m=+110.913401352 (durationBeforeRetry 16s). 
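
Each failed volume operation above is rescheduled with "No retries permitted until ... (durationBeforeRetry 16s)". A doubling-backoff sketch consistent with that observed 16 s delay follows: 16 s is what six consecutive failures produce from a 0.5 s base (0.5 * 2**5). The base, factor, and cap here are assumptions for illustration only, not values read from this log or from kubelet source.

#!/usr/bin/env python3
# Sketch of the doubling backoff implied by "durationBeforeRetry 16s" above.
# base/factor/cap are illustrative assumptions, not confirmed kubelet values.
def backoff_schedule(base=0.5, factor=2.0, cap=120.0):
    delay = base
    while True:
        yield min(delay, cap)
        delay *= factor

if __name__ == "__main__":
    for attempt, delay in zip(range(1, 8), backoff_schedule()):
        print(f"retry {attempt}: wait {delay:g}s")  # attempt 6 prints 16s
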
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.504492 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.504545 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.504570 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.504586 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.504596 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:37Z","lastTransitionTime":"2026-01-21T18:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.565573 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" event={"ID":"b19b831f-eaf0-4c77-859b-84eb9a5f233c","Type":"ContainerStarted","Data":"554ace079195fe7f2ecf4de1b40c0c4549e4632325ec6988cab9cec5c62f4f7b"} Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.566515 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"309c46fe1912b25e89e0122225dae97f51e160875084f52e00042be74f683aca"} Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.568372 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bb5lc" event={"ID":"fedcb6dd-93e2-4530-b748-52a296d7809d","Type":"ContainerStarted","Data":"f187e7a53d271478549647f800e72e2c33a8254ae718a73fdd3f037eb93f20aa"} Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.571358 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6pvpm" event={"ID":"d9b34413-4767-4d59-b13b-8f882453977a","Type":"ContainerStarted","Data":"22c5bf9bc5a8e6069ae71e1c268ae1a485f69de67b5e9606ce7e353dd2c8c6c1"} Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.573029 5099 generic.go:358] "Generic (PLEG): container finished" podID="d7521550-bc40-43eb-bcb0-f563416d810b" containerID="329659e4786b839f2082f85117e8f694a6aa7d675d581f47afed9010eae8cdc7" exitCode=0 Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.573080 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" event={"ID":"d7521550-bc40-43eb-bcb0-f563416d810b","Type":"ContainerDied","Data":"329659e4786b839f2082f85117e8f694a6aa7d675d581f47afed9010eae8cdc7"} Jan 21 18:15:37 crc 
kubenswrapper[5099]: I0121 18:15:37.599936 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-s88dj" podStartSLOduration=66.5998304 podStartE2EDuration="1m6.5998304s" podCreationTimestamp="2026-01-21 18:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:15:37.068560677 +0000 UTC m=+94.482523138" watchObservedRunningTime="2026-01-21 18:15:37.5998304 +0000 UTC m=+95.013792861" Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.600117 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d26f0ad-829f-4f64-82b5-1292bd2316f0-metrics-certs\") pod \"network-metrics-daemon-tsdhb\" (UID: \"0d26f0ad-829f-4f64-82b5-1292bd2316f0\") " pod="openshift-multus/network-metrics-daemon-tsdhb" Jan 21 18:15:37 crc kubenswrapper[5099]: E0121 18:15:37.602115 5099 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 18:15:37 crc kubenswrapper[5099]: E0121 18:15:37.602211 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d26f0ad-829f-4f64-82b5-1292bd2316f0-metrics-certs podName:0d26f0ad-829f-4f64-82b5-1292bd2316f0 nodeName:}" failed. No retries permitted until 2026-01-21 18:15:53.602193431 +0000 UTC m=+111.016155892 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0d26f0ad-829f-4f64-82b5-1292bd2316f0-metrics-certs") pod "network-metrics-daemon-tsdhb" (UID: "0d26f0ad-829f-4f64-82b5-1292bd2316f0") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.610060 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.610122 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.610136 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.610157 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.610174 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:37Z","lastTransitionTime":"2026-01-21T18:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
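The two nestedpendingoperations entries above show the volume manager's per-volume exponential backoff: both failed operations at 18:15:37 are blocked for 16s, until 18:15:53. A minimal sketch of that doubling schedule, using commonly cited kubelet defaults (500ms initial delay, factor 2, ~2m2s cap) that are assumptions here, not values read from this log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed defaults for the kubelet's per-volume retry backoff; the log
	// itself only shows the 16s step, which is attempt 6 in this schedule.
	initial := 500 * time.Millisecond
	maxDelay := 2*time.Minute + 2*time.Second

	d := initial
	for attempt := 1; attempt <= 9; attempt++ {
		fmt.Printf("attempt %d: durationBeforeRetry %v\n", attempt, d)
		d *= 2
		if d > maxDelay {
			d = maxDelay // clamp instead of growing without bound
		}
	}
}
```

Under these assumptions the schedule runs 500ms, 1s, 2s, 4s, 8s, 16s, 32s, 1m4s, 2m2s, which is consistent with the "durationBeforeRetry 16s" seen here after several earlier failures.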
Has your network provider started?"} Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.616092 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-6pvpm" podStartSLOduration=66.616069261 podStartE2EDuration="1m6.616069261s" podCreationTimestamp="2026-01-21 18:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:15:37.615159287 +0000 UTC m=+95.029121748" watchObservedRunningTime="2026-01-21 18:15:37.616069261 +0000 UTC m=+95.030031722" Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.712986 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.713031 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.713042 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.713058 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.713069 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:37Z","lastTransitionTime":"2026-01-21T18:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.817393 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.817431 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.817445 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.817458 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.817467 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:37Z","lastTransitionTime":"2026-01-21T18:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.928890 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tsdhb" Jan 21 18:15:37 crc kubenswrapper[5099]: E0121 18:15:37.929025 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tsdhb" podUID="0d26f0ad-829f-4f64-82b5-1292bd2316f0" Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.929253 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 18:15:37 crc kubenswrapper[5099]: E0121 18:15:37.929501 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.934870 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.934926 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.934938 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.934956 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:37 crc kubenswrapper[5099]: I0121 18:15:37.934969 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:37Z","lastTransitionTime":"2026-01-21T18:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.038120 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.038150 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.038159 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.038173 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.038184 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:38Z","lastTransitionTime":"2026-01-21T18:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.146851 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.146899 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.146912 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.146930 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.146941 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:38Z","lastTransitionTime":"2026-01-21T18:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.249606 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.249657 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.249668 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.249687 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.249699 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:38Z","lastTransitionTime":"2026-01-21T18:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.351312 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.351346 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.351355 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.351369 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.351379 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:38Z","lastTransitionTime":"2026-01-21T18:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.453286 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.453348 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.453362 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.453384 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.453398 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:38Z","lastTransitionTime":"2026-01-21T18:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.555144 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.555184 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.555194 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.555212 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.555223 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:38Z","lastTransitionTime":"2026-01-21T18:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.585429 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" event={"ID":"d7521550-bc40-43eb-bcb0-f563416d810b","Type":"ContainerStarted","Data":"078229736ed14d660e3b4e681785719d4a6d65296cfeb9fb3bb80a4e3c6d88c1"} Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.585487 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" event={"ID":"d7521550-bc40-43eb-bcb0-f563416d810b","Type":"ContainerStarted","Data":"be8ed0e4f720658f71be2dd52370e9e4b72182cfaabe7e44bc52ace65660ab34"} Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.585501 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" event={"ID":"d7521550-bc40-43eb-bcb0-f563416d810b","Type":"ContainerStarted","Data":"3f3a6f8ec66e21454a7efd27b778af42d65195909a54ee49adcee8f6711d1059"} Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.587337 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" event={"ID":"b19b831f-eaf0-4c77-859b-84eb9a5f233c","Type":"ContainerStarted","Data":"febba360ca9877ef3262201cc8c69f26d8e940e010aaefb0edb49fa307728e5e"} Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.589941 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-nxrc9" event={"ID":"dd3b8a6d-69a8-4079-a747-f379b71bcafe","Type":"ContainerStarted","Data":"de7e076cb058e010d8b522ea46baabd209c7c88dc792cced5d4ed291838ddd71"} Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.589971 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-nxrc9" event={"ID":"dd3b8a6d-69a8-4079-a747-f379b71bcafe","Type":"ContainerStarted","Data":"e48ba06eed8a60bdf2d7557c294e37262e8691610d885782f8df2db2aa267c1c"} Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.596662 5099 generic.go:358] "Generic (PLEG): container finished" podID="fedcb6dd-93e2-4530-b748-52a296d7809d" containerID="f187e7a53d271478549647f800e72e2c33a8254ae718a73fdd3f037eb93f20aa" exitCode=0 Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.596779 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bb5lc" event={"ID":"fedcb6dd-93e2-4530-b748-52a296d7809d","Type":"ContainerDied","Data":"f187e7a53d271478549647f800e72e2c33a8254ae718a73fdd3f037eb93f20aa"} Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.599710 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-2q8ng" event={"ID":"57ef6e89-3637-4516-a464-973f45d9ed03","Type":"ContainerStarted","Data":"350620d6f02e0037cb4a517d8cc876a300d3f78c6f4a0dd8a0139ef02bb7e86f"} Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.606687 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podStartSLOduration=67.606665955 podStartE2EDuration="1m7.606665955s" podCreationTimestamp="2026-01-21 18:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:15:38.60608712 +0000 UTC m=+96.020049581" watchObservedRunningTime="2026-01-21 18:15:38.606665955 +0000 UTC m=+96.020628416" Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 
18:15:38.649510 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-2q8ng" podStartSLOduration=67.649492283 podStartE2EDuration="1m7.649492283s" podCreationTimestamp="2026-01-21 18:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:15:38.6489474 +0000 UTC m=+96.062909861" watchObservedRunningTime="2026-01-21 18:15:38.649492283 +0000 UTC m=+96.063454754" Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.657003 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.657045 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.657054 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.657070 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.657081 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:38Z","lastTransitionTime":"2026-01-21T18:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.668268 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-nxrc9" podStartSLOduration=67.668242939 podStartE2EDuration="1m7.668242939s" podCreationTimestamp="2026-01-21 18:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:15:38.667434518 +0000 UTC m=+96.081396979" watchObservedRunningTime="2026-01-21 18:15:38.668242939 +0000 UTC m=+96.082205400" Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.760364 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.760419 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.760434 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.760451 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.760462 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:38Z","lastTransitionTime":"2026-01-21T18:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
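The startup-latency entries are plain subtraction: for node-ca-2q8ng, podCreationTimestamp 18:14:31 to watchObservedRunningTime 18:15:38.649492283 gives exactly the reported 67.649492283s, and because both pull timestamps are zero-valued here, podStartSLOduration and podStartE2EDuration coincide. A quick check with the standard library:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the node-ca-2q8ng entry above.
	created, _ := time.Parse(time.RFC3339, "2026-01-21T18:14:31Z")
	observed, _ := time.Parse(time.RFC3339Nano, "2026-01-21T18:15:38.649492283Z")

	// Prints "1m7.649492283s", matching podStartE2EDuration in the log.
	fmt.Println(observed.Sub(created))
}
```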
Has your network provider started?"} Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.863700 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.863772 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.863787 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.864002 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.864015 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:38Z","lastTransitionTime":"2026-01-21T18:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.912923 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 18:15:38 crc kubenswrapper[5099]: E0121 18:15:38.913040 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.913371 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 18:15:38 crc kubenswrapper[5099]: E0121 18:15:38.913427 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.966204 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.966242 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.966252 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.966267 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:38 crc kubenswrapper[5099]: I0121 18:15:38.966276 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:38Z","lastTransitionTime":"2026-01-21T18:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:39 crc kubenswrapper[5099]: I0121 18:15:39.072911 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:39 crc kubenswrapper[5099]: I0121 18:15:39.073149 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:39 crc kubenswrapper[5099]: I0121 18:15:39.073158 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:39 crc kubenswrapper[5099]: I0121 18:15:39.073170 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:39 crc kubenswrapper[5099]: I0121 18:15:39.073179 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:39Z","lastTransitionTime":"2026-01-21T18:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:39 crc kubenswrapper[5099]: I0121 18:15:39.181633 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:39 crc kubenswrapper[5099]: I0121 18:15:39.181686 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:39 crc kubenswrapper[5099]: I0121 18:15:39.181699 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:39 crc kubenswrapper[5099]: I0121 18:15:39.181720 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:39 crc kubenswrapper[5099]: I0121 18:15:39.181765 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:39Z","lastTransitionTime":"2026-01-21T18:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:39 crc kubenswrapper[5099]: I0121 18:15:39.287079 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:39 crc kubenswrapper[5099]: I0121 18:15:39.287144 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:39 crc kubenswrapper[5099]: I0121 18:15:39.287159 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:39 crc kubenswrapper[5099]: I0121 18:15:39.287177 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:39 crc kubenswrapper[5099]: I0121 18:15:39.287189 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:39Z","lastTransitionTime":"2026-01-21T18:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:39 crc kubenswrapper[5099]: I0121 18:15:39.389523 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:39 crc kubenswrapper[5099]: I0121 18:15:39.389581 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:39 crc kubenswrapper[5099]: I0121 18:15:39.389593 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:39 crc kubenswrapper[5099]: I0121 18:15:39.389610 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:39 crc kubenswrapper[5099]: I0121 18:15:39.389620 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:39Z","lastTransitionTime":"2026-01-21T18:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:39 crc kubenswrapper[5099]: I0121 18:15:39.492495 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:39 crc kubenswrapper[5099]: I0121 18:15:39.492553 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:39 crc kubenswrapper[5099]: I0121 18:15:39.492568 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:39 crc kubenswrapper[5099]: I0121 18:15:39.492588 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:39 crc kubenswrapper[5099]: I0121 18:15:39.492599 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:39Z","lastTransitionTime":"2026-01-21T18:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:39 crc kubenswrapper[5099]: I0121 18:15:39.595686 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:39 crc kubenswrapper[5099]: I0121 18:15:39.596085 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:39 crc kubenswrapper[5099]: I0121 18:15:39.596108 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:39 crc kubenswrapper[5099]: I0121 18:15:39.596130 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:39 crc kubenswrapper[5099]: I0121 18:15:39.596141 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:39Z","lastTransitionTime":"2026-01-21T18:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:39 crc kubenswrapper[5099]: I0121 18:15:39.629237 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bb5lc" event={"ID":"fedcb6dd-93e2-4530-b748-52a296d7809d","Type":"ContainerStarted","Data":"52db981c989007c120d15f74ede2edd267621313a85c4f08bc1c6481c52df71d"} Jan 21 18:15:39 crc kubenswrapper[5099]: I0121 18:15:39.661530 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" event={"ID":"d7521550-bc40-43eb-bcb0-f563416d810b","Type":"ContainerStarted","Data":"ec56a2db33e5899e939975d066614498ddc50ad9fd51d101c00aeb90cb8a4454"} Jan 21 18:15:39 crc kubenswrapper[5099]: I0121 18:15:39.661876 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" event={"ID":"d7521550-bc40-43eb-bcb0-f563416d810b","Type":"ContainerStarted","Data":"5eac65293831383e6044c55378d17fa35899f3a75c86c3ef4ead445b6e94bbf3"} Jan 21 18:15:39 crc kubenswrapper[5099]: I0121 18:15:39.661959 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" event={"ID":"d7521550-bc40-43eb-bcb0-f563416d810b","Type":"ContainerStarted","Data":"15e8f6887fa5951d2123b3e4d28614bc9f8a4dd5867c331733faa3802c626ab0"} Jan 21 18:15:39 crc kubenswrapper[5099]: I0121 18:15:39.758979 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:39 crc kubenswrapper[5099]: I0121 18:15:39.759233 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:39 crc kubenswrapper[5099]: I0121 18:15:39.759379 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:39 crc kubenswrapper[5099]: I0121 18:15:39.759474 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:39 crc kubenswrapper[5099]: I0121 18:15:39.759564 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:39Z","lastTransitionTime":"2026-01-21T18:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:39 crc kubenswrapper[5099]: I0121 18:15:39.861872 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:39 crc kubenswrapper[5099]: I0121 18:15:39.862923 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:39 crc kubenswrapper[5099]: I0121 18:15:39.863326 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:39 crc kubenswrapper[5099]: I0121 18:15:39.863511 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:39 crc kubenswrapper[5099]: I0121 18:15:39.863638 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:39Z","lastTransitionTime":"2026-01-21T18:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:39 crc kubenswrapper[5099]: I0121 18:15:39.912998 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tsdhb" Jan 21 18:15:39 crc kubenswrapper[5099]: E0121 18:15:39.913426 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tsdhb" podUID="0d26f0ad-829f-4f64-82b5-1292bd2316f0" Jan 21 18:15:39 crc kubenswrapper[5099]: I0121 18:15:39.913537 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 18:15:39 crc kubenswrapper[5099]: E0121 18:15:39.913689 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 21 18:15:39 crc kubenswrapper[5099]: I0121 18:15:39.969257 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:39 crc kubenswrapper[5099]: I0121 18:15:39.969315 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:39 crc kubenswrapper[5099]: I0121 18:15:39.969325 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:39 crc kubenswrapper[5099]: I0121 18:15:39.969345 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:39 crc kubenswrapper[5099]: I0121 18:15:39.969357 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:39Z","lastTransitionTime":"2026-01-21T18:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:40 crc kubenswrapper[5099]: I0121 18:15:40.093400 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:40 crc kubenswrapper[5099]: I0121 18:15:40.093797 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:40 crc kubenswrapper[5099]: I0121 18:15:40.093813 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:40 crc kubenswrapper[5099]: I0121 18:15:40.093835 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:40 crc kubenswrapper[5099]: I0121 18:15:40.093847 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:40Z","lastTransitionTime":"2026-01-21T18:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:40 crc kubenswrapper[5099]: I0121 18:15:40.197278 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:40 crc kubenswrapper[5099]: I0121 18:15:40.197364 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:40 crc kubenswrapper[5099]: I0121 18:15:40.197382 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:40 crc kubenswrapper[5099]: I0121 18:15:40.197403 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:40 crc kubenswrapper[5099]: I0121 18:15:40.197421 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:40Z","lastTransitionTime":"2026-01-21T18:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:40 crc kubenswrapper[5099]: I0121 18:15:40.306201 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:40 crc kubenswrapper[5099]: I0121 18:15:40.306418 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:40 crc kubenswrapper[5099]: I0121 18:15:40.306513 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:40 crc kubenswrapper[5099]: I0121 18:15:40.306589 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:40 crc kubenswrapper[5099]: I0121 18:15:40.306651 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:40Z","lastTransitionTime":"2026-01-21T18:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:40 crc kubenswrapper[5099]: I0121 18:15:40.409524 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:40 crc kubenswrapper[5099]: I0121 18:15:40.409583 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:40 crc kubenswrapper[5099]: I0121 18:15:40.409593 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:40 crc kubenswrapper[5099]: I0121 18:15:40.409611 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:40 crc kubenswrapper[5099]: I0121 18:15:40.409622 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:40Z","lastTransitionTime":"2026-01-21T18:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:40 crc kubenswrapper[5099]: I0121 18:15:40.512447 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:40 crc kubenswrapper[5099]: I0121 18:15:40.512517 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:40 crc kubenswrapper[5099]: I0121 18:15:40.512545 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:40 crc kubenswrapper[5099]: I0121 18:15:40.512567 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:40 crc kubenswrapper[5099]: I0121 18:15:40.512580 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:40Z","lastTransitionTime":"2026-01-21T18:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:40 crc kubenswrapper[5099]: I0121 18:15:40.617147 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:40 crc kubenswrapper[5099]: I0121 18:15:40.617230 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:40 crc kubenswrapper[5099]: I0121 18:15:40.617244 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:40 crc kubenswrapper[5099]: I0121 18:15:40.617271 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:40 crc kubenswrapper[5099]: I0121 18:15:40.617286 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:40Z","lastTransitionTime":"2026-01-21T18:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:40 crc kubenswrapper[5099]: I0121 18:15:40.676971 5099 generic.go:358] "Generic (PLEG): container finished" podID="fedcb6dd-93e2-4530-b748-52a296d7809d" containerID="52db981c989007c120d15f74ede2edd267621313a85c4f08bc1c6481c52df71d" exitCode=0 Jan 21 18:15:40 crc kubenswrapper[5099]: I0121 18:15:40.677056 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bb5lc" event={"ID":"fedcb6dd-93e2-4530-b748-52a296d7809d","Type":"ContainerDied","Data":"52db981c989007c120d15f74ede2edd267621313a85c4f08bc1c6481c52df71d"} Jan 21 18:15:40 crc kubenswrapper[5099]: I0121 18:15:40.726831 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:40 crc kubenswrapper[5099]: I0121 18:15:40.727070 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:40 crc kubenswrapper[5099]: I0121 18:15:40.727082 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:40 crc kubenswrapper[5099]: I0121 18:15:40.727099 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:40 crc kubenswrapper[5099]: I0121 18:15:40.727113 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:40Z","lastTransitionTime":"2026-01-21T18:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:40 crc kubenswrapper[5099]: I0121 18:15:40.836006 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:40 crc kubenswrapper[5099]: I0121 18:15:40.836073 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:40 crc kubenswrapper[5099]: I0121 18:15:40.836088 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:40 crc kubenswrapper[5099]: I0121 18:15:40.836109 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:40 crc kubenswrapper[5099]: I0121 18:15:40.836120 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:40Z","lastTransitionTime":"2026-01-21T18:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:40 crc kubenswrapper[5099]: I0121 18:15:40.913916 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 18:15:40 crc kubenswrapper[5099]: I0121 18:15:40.913974 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 18:15:40 crc kubenswrapper[5099]: E0121 18:15:40.914086 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 21 18:15:40 crc kubenswrapper[5099]: E0121 18:15:40.914182 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 21 18:15:40 crc kubenswrapper[5099]: I0121 18:15:40.941723 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:40 crc kubenswrapper[5099]: I0121 18:15:40.941836 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:40 crc kubenswrapper[5099]: I0121 18:15:40.941853 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:40 crc kubenswrapper[5099]: I0121 18:15:40.941875 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:40 crc kubenswrapper[5099]: I0121 18:15:40.941900 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:40Z","lastTransitionTime":"2026-01-21T18:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:41 crc kubenswrapper[5099]: I0121 18:15:41.054412 5099 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Jan 21 18:15:41 crc kubenswrapper[5099]: I0121 18:15:41.072041 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:41 crc kubenswrapper[5099]: I0121 18:15:41.072102 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:41 crc kubenswrapper[5099]: I0121 18:15:41.072113 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:41 crc kubenswrapper[5099]: I0121 18:15:41.072130 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:41 crc kubenswrapper[5099]: I0121 18:15:41.072162 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:41Z","lastTransitionTime":"2026-01-21T18:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:41 crc kubenswrapper[5099]: I0121 18:15:41.175922 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:41 crc kubenswrapper[5099]: I0121 18:15:41.175990 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:41 crc kubenswrapper[5099]: I0121 18:15:41.176004 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:41 crc kubenswrapper[5099]: I0121 18:15:41.176035 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:41 crc kubenswrapper[5099]: I0121 18:15:41.176048 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:41Z","lastTransitionTime":"2026-01-21T18:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:41 crc kubenswrapper[5099]: I0121 18:15:41.299007 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:41 crc kubenswrapper[5099]: I0121 18:15:41.299092 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:41 crc kubenswrapper[5099]: I0121 18:15:41.299105 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:41 crc kubenswrapper[5099]: I0121 18:15:41.299126 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:41 crc kubenswrapper[5099]: I0121 18:15:41.299159 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:41Z","lastTransitionTime":"2026-01-21T18:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:41 crc kubenswrapper[5099]: I0121 18:15:41.402116 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:41 crc kubenswrapper[5099]: I0121 18:15:41.402193 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:41 crc kubenswrapper[5099]: I0121 18:15:41.402206 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:41 crc kubenswrapper[5099]: I0121 18:15:41.402227 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:41 crc kubenswrapper[5099]: I0121 18:15:41.402255 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:41Z","lastTransitionTime":"2026-01-21T18:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:41 crc kubenswrapper[5099]: I0121 18:15:41.505504 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:41 crc kubenswrapper[5099]: I0121 18:15:41.505551 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:41 crc kubenswrapper[5099]: I0121 18:15:41.505562 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:41 crc kubenswrapper[5099]: I0121 18:15:41.505579 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:41 crc kubenswrapper[5099]: I0121 18:15:41.505591 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:41Z","lastTransitionTime":"2026-01-21T18:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:41 crc kubenswrapper[5099]: I0121 18:15:41.607455 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:41 crc kubenswrapper[5099]: I0121 18:15:41.607508 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:41 crc kubenswrapper[5099]: I0121 18:15:41.607519 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:41 crc kubenswrapper[5099]: I0121 18:15:41.607536 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:41 crc kubenswrapper[5099]: I0121 18:15:41.607546 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:41Z","lastTransitionTime":"2026-01-21T18:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
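The repeating setters.go:618 entries in this stretch show the kubelet re-stamping the node's Ready condition to False (reason KubeletNotReady) on every status-update pass until a CNI configuration appears. A minimal sketch, assuming nothing beyond the JSON shape printed in the log (message field abridged), that decodes one such condition payload:

    // Illustrative only, not kubelet source: decode one of the condition
    // payloads printed by setters.go above and test readiness.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    type nodeCondition struct {
        Type               string `json:"type"`
        Status             string `json:"status"`
        LastHeartbeatTime  string `json:"lastHeartbeatTime"`
        LastTransitionTime string `json:"lastTransitionTime"`
        Reason             string `json:"reason"`
        Message            string `json:"message"`
    }

    func main() {
        raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:41Z","lastTransitionTime":"2026-01-21T18:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready"}`
        var c nodeCondition
        if err := json.Unmarshal([]byte(raw), &c); err != nil {
            panic(err)
        }
        // A node runs ordinary workloads only while Ready is "True".
        fmt.Printf("ready=%v reason=%s\n", c.Status == "True", c.Reason)
    }
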
Has your network provider started?"} Jan 21 18:15:41 crc kubenswrapper[5099]: I0121 18:15:41.687606 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" event={"ID":"d7521550-bc40-43eb-bcb0-f563416d810b","Type":"ContainerStarted","Data":"715177bbd2f8050a52d5a8fac8a582eb809c747ef416e4a4da743949f600357b"} Jan 21 18:15:41 crc kubenswrapper[5099]: I0121 18:15:41.689783 5099 generic.go:358] "Generic (PLEG): container finished" podID="fedcb6dd-93e2-4530-b748-52a296d7809d" containerID="fd129b6dfaea85978e95ff4815b06a26a157418c984ed7bdfd59a319029a7e08" exitCode=0 Jan 21 18:15:41 crc kubenswrapper[5099]: I0121 18:15:41.689823 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bb5lc" event={"ID":"fedcb6dd-93e2-4530-b748-52a296d7809d","Type":"ContainerDied","Data":"fd129b6dfaea85978e95ff4815b06a26a157418c984ed7bdfd59a319029a7e08"} Jan 21 18:15:41 crc kubenswrapper[5099]: I0121 18:15:41.718821 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:41 crc kubenswrapper[5099]: I0121 18:15:41.719367 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:41 crc kubenswrapper[5099]: I0121 18:15:41.719384 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:41 crc kubenswrapper[5099]: I0121 18:15:41.719403 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:41 crc kubenswrapper[5099]: I0121 18:15:41.719416 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:41Z","lastTransitionTime":"2026-01-21T18:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:41 crc kubenswrapper[5099]: I0121 18:15:41.834192 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:41 crc kubenswrapper[5099]: I0121 18:15:41.834281 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:41 crc kubenswrapper[5099]: I0121 18:15:41.834297 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:41 crc kubenswrapper[5099]: I0121 18:15:41.834316 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:41 crc kubenswrapper[5099]: I0121 18:15:41.834332 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:41Z","lastTransitionTime":"2026-01-21T18:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:41 crc kubenswrapper[5099]: I0121 18:15:41.915663 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 18:15:41 crc kubenswrapper[5099]: E0121 18:15:41.915916 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 21 18:15:41 crc kubenswrapper[5099]: I0121 18:15:41.916480 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tsdhb" Jan 21 18:15:41 crc kubenswrapper[5099]: E0121 18:15:41.916563 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tsdhb" podUID="0d26f0ad-829f-4f64-82b5-1292bd2316f0" Jan 21 18:15:42 crc kubenswrapper[5099]: I0121 18:15:42.029930 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:42 crc kubenswrapper[5099]: I0121 18:15:42.029991 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:42 crc kubenswrapper[5099]: I0121 18:15:42.030005 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:42 crc kubenswrapper[5099]: I0121 18:15:42.030025 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:42 crc kubenswrapper[5099]: I0121 18:15:42.030038 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:42Z","lastTransitionTime":"2026-01-21T18:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:42 crc kubenswrapper[5099]: I0121 18:15:42.133388 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:42 crc kubenswrapper[5099]: I0121 18:15:42.133441 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:42 crc kubenswrapper[5099]: I0121 18:15:42.133453 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:42 crc kubenswrapper[5099]: I0121 18:15:42.133473 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:42 crc kubenswrapper[5099]: I0121 18:15:42.133483 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:42Z","lastTransitionTime":"2026-01-21T18:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:42 crc kubenswrapper[5099]: I0121 18:15:42.240377 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:42 crc kubenswrapper[5099]: I0121 18:15:42.240810 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:42 crc kubenswrapper[5099]: I0121 18:15:42.240830 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:42 crc kubenswrapper[5099]: I0121 18:15:42.240849 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:42 crc kubenswrapper[5099]: I0121 18:15:42.240861 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:42Z","lastTransitionTime":"2026-01-21T18:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:42 crc kubenswrapper[5099]: I0121 18:15:42.344240 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:42 crc kubenswrapper[5099]: I0121 18:15:42.344283 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:42 crc kubenswrapper[5099]: I0121 18:15:42.344293 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:42 crc kubenswrapper[5099]: I0121 18:15:42.344310 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:42 crc kubenswrapper[5099]: I0121 18:15:42.344321 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:42Z","lastTransitionTime":"2026-01-21T18:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 18:15:42 crc kubenswrapper[5099]: I0121 18:15:42.446915 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:42 crc kubenswrapper[5099]: I0121 18:15:42.446961 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:42 crc kubenswrapper[5099]: I0121 18:15:42.446972 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:42 crc kubenswrapper[5099]: I0121 18:15:42.446987 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:42 crc kubenswrapper[5099]: I0121 18:15:42.447001 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:42Z","lastTransitionTime":"2026-01-21T18:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:42 crc kubenswrapper[5099]: I0121 18:15:42.615299 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:42 crc kubenswrapper[5099]: I0121 18:15:42.615355 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:42 crc kubenswrapper[5099]: I0121 18:15:42.615368 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:42 crc kubenswrapper[5099]: I0121 18:15:42.615389 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:42 crc kubenswrapper[5099]: I0121 18:15:42.615402 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:42Z","lastTransitionTime":"2026-01-21T18:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 18:15:42 crc kubenswrapper[5099]: I0121 18:15:42.631115 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 18:15:42 crc kubenswrapper[5099]: I0121 18:15:42.631174 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 18:15:42 crc kubenswrapper[5099]: I0121 18:15:42.631191 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 18:15:42 crc kubenswrapper[5099]: I0121 18:15:42.631212 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 18:15:42 crc kubenswrapper[5099]: I0121 18:15:42.631226 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T18:15:42Z","lastTransitionTime":"2026-01-21T18:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
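Every one of these sync failures traces back to the same root cause: nothing has yet written a CNI configuration into /etc/kubernetes/cni/net.d/, because ovnkube-node and the multus plugins are still coming up. A hypothetical stand-alone diagnostic, not part of any tool named in this log, that inspects the directory the errors cite:

    // Hypothetical diagnostic: report whether the CNI conf dir named in the
    // errors above contains any configuration files yet.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        dir := "/etc/kubernetes/cni/net.d" // path taken from the log messages
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Println("cannot read CNI conf dir:", err)
            return
        }
        found := 0
        for _, e := range entries {
            // Extensions commonly accepted by CNI config loaders (an
            // assumption here, not taken from this log).
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                fmt.Println("CNI config present:", e.Name())
                found++
            }
        }
        if found == 0 {
            fmt.Println("no CNI configuration file; network plugin not started yet")
        }
    }
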
Has your network provider started?"} Jan 21 18:15:42 crc kubenswrapper[5099]: I0121 18:15:42.681565 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dgxdk"] Jan 21 18:15:42 crc kubenswrapper[5099]: I0121 18:15:42.759279 5099 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Jan 21 18:15:42 crc kubenswrapper[5099]: I0121 18:15:42.794047 5099 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Jan 21 18:15:42 crc kubenswrapper[5099]: I0121 18:15:42.843302 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dgxdk" Jan 21 18:15:42 crc kubenswrapper[5099]: I0121 18:15:42.847132 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Jan 21 18:15:42 crc kubenswrapper[5099]: I0121 18:15:42.847467 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Jan 21 18:15:42 crc kubenswrapper[5099]: I0121 18:15:42.847504 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Jan 21 18:15:42 crc kubenswrapper[5099]: I0121 18:15:42.847796 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Jan 21 18:15:42 crc kubenswrapper[5099]: I0121 18:15:42.913601 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 18:15:42 crc kubenswrapper[5099]: I0121 18:15:42.913676 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 18:15:42 crc kubenswrapper[5099]: E0121 18:15:42.913906 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 21 18:15:42 crc kubenswrapper[5099]: E0121 18:15:42.914119 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 21 18:15:42 crc kubenswrapper[5099]: I0121 18:15:42.945864 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3392e8a0-3d8c-4ed1-ab91-c1d7d402265a-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-dgxdk\" (UID: \"3392e8a0-3d8c-4ed1-ab91-c1d7d402265a\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dgxdk" Jan 21 18:15:42 crc kubenswrapper[5099]: I0121 18:15:42.945940 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3392e8a0-3d8c-4ed1-ab91-c1d7d402265a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-dgxdk\" (UID: \"3392e8a0-3d8c-4ed1-ab91-c1d7d402265a\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dgxdk" Jan 21 18:15:42 crc kubenswrapper[5099]: I0121 18:15:42.946086 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3392e8a0-3d8c-4ed1-ab91-c1d7d402265a-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-dgxdk\" (UID: \"3392e8a0-3d8c-4ed1-ab91-c1d7d402265a\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dgxdk" Jan 21 18:15:42 crc kubenswrapper[5099]: I0121 18:15:42.946299 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3392e8a0-3d8c-4ed1-ab91-c1d7d402265a-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-dgxdk\" (UID: \"3392e8a0-3d8c-4ed1-ab91-c1d7d402265a\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dgxdk" Jan 21 18:15:42 crc kubenswrapper[5099]: I0121 18:15:42.946376 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3392e8a0-3d8c-4ed1-ab91-c1d7d402265a-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-dgxdk\" (UID: \"3392e8a0-3d8c-4ed1-ab91-c1d7d402265a\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dgxdk" Jan 21 18:15:43 crc kubenswrapper[5099]: I0121 18:15:43.048193 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3392e8a0-3d8c-4ed1-ab91-c1d7d402265a-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-dgxdk\" (UID: \"3392e8a0-3d8c-4ed1-ab91-c1d7d402265a\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dgxdk" Jan 21 18:15:43 crc kubenswrapper[5099]: I0121 18:15:43.048301 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3392e8a0-3d8c-4ed1-ab91-c1d7d402265a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-dgxdk\" (UID: \"3392e8a0-3d8c-4ed1-ab91-c1d7d402265a\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dgxdk" Jan 21 18:15:43 crc kubenswrapper[5099]: I0121 18:15:43.048348 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3392e8a0-3d8c-4ed1-ab91-c1d7d402265a-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-dgxdk\" (UID: 
\"3392e8a0-3d8c-4ed1-ab91-c1d7d402265a\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dgxdk" Jan 21 18:15:43 crc kubenswrapper[5099]: I0121 18:15:43.048379 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3392e8a0-3d8c-4ed1-ab91-c1d7d402265a-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-dgxdk\" (UID: \"3392e8a0-3d8c-4ed1-ab91-c1d7d402265a\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dgxdk" Jan 21 18:15:43 crc kubenswrapper[5099]: I0121 18:15:43.048411 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3392e8a0-3d8c-4ed1-ab91-c1d7d402265a-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-dgxdk\" (UID: \"3392e8a0-3d8c-4ed1-ab91-c1d7d402265a\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dgxdk" Jan 21 18:15:43 crc kubenswrapper[5099]: I0121 18:15:43.048756 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3392e8a0-3d8c-4ed1-ab91-c1d7d402265a-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-dgxdk\" (UID: \"3392e8a0-3d8c-4ed1-ab91-c1d7d402265a\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dgxdk" Jan 21 18:15:43 crc kubenswrapper[5099]: I0121 18:15:43.048895 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3392e8a0-3d8c-4ed1-ab91-c1d7d402265a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-dgxdk\" (UID: \"3392e8a0-3d8c-4ed1-ab91-c1d7d402265a\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dgxdk" Jan 21 18:15:43 crc kubenswrapper[5099]: I0121 18:15:43.050589 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3392e8a0-3d8c-4ed1-ab91-c1d7d402265a-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-dgxdk\" (UID: \"3392e8a0-3d8c-4ed1-ab91-c1d7d402265a\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dgxdk" Jan 21 18:15:43 crc kubenswrapper[5099]: I0121 18:15:43.058971 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3392e8a0-3d8c-4ed1-ab91-c1d7d402265a-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-dgxdk\" (UID: \"3392e8a0-3d8c-4ed1-ab91-c1d7d402265a\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dgxdk" Jan 21 18:15:43 crc kubenswrapper[5099]: I0121 18:15:43.075339 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3392e8a0-3d8c-4ed1-ab91-c1d7d402265a-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-dgxdk\" (UID: \"3392e8a0-3d8c-4ed1-ab91-c1d7d402265a\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dgxdk" Jan 21 18:15:43 crc kubenswrapper[5099]: I0121 18:15:43.167010 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dgxdk" Jan 21 18:15:43 crc kubenswrapper[5099]: W0121 18:15:43.182578 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3392e8a0_3d8c_4ed1_ab91_c1d7d402265a.slice/crio-534b5e90e7929ca831bb7ba7bbdce5aa69be4cd88bb4c25d2aa1ad510b23937c WatchSource:0}: Error finding container 534b5e90e7929ca831bb7ba7bbdce5aa69be4cd88bb4c25d2aa1ad510b23937c: Status 404 returned error can't find the container with id 534b5e90e7929ca831bb7ba7bbdce5aa69be4cd88bb4c25d2aa1ad510b23937c Jan 21 18:15:43 crc kubenswrapper[5099]: I0121 18:15:43.699847 5099 generic.go:358] "Generic (PLEG): container finished" podID="fedcb6dd-93e2-4530-b748-52a296d7809d" containerID="c1d0e15dcc342111d5a13495fb72531c0e62b903169be38942d382b5bee8b004" exitCode=0 Jan 21 18:15:43 crc kubenswrapper[5099]: I0121 18:15:43.699962 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bb5lc" event={"ID":"fedcb6dd-93e2-4530-b748-52a296d7809d","Type":"ContainerDied","Data":"c1d0e15dcc342111d5a13495fb72531c0e62b903169be38942d382b5bee8b004"} Jan 21 18:15:43 crc kubenswrapper[5099]: I0121 18:15:43.702924 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dgxdk" event={"ID":"3392e8a0-3d8c-4ed1-ab91-c1d7d402265a","Type":"ContainerStarted","Data":"534b5e90e7929ca831bb7ba7bbdce5aa69be4cd88bb4c25d2aa1ad510b23937c"} Jan 21 18:15:43 crc kubenswrapper[5099]: I0121 18:15:43.930748 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tsdhb" Jan 21 18:15:43 crc kubenswrapper[5099]: E0121 18:15:43.931001 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tsdhb" podUID="0d26f0ad-829f-4f64-82b5-1292bd2316f0" Jan 21 18:15:43 crc kubenswrapper[5099]: I0121 18:15:43.933023 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 18:15:43 crc kubenswrapper[5099]: E0121 18:15:43.933104 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 21 18:15:44 crc kubenswrapper[5099]: I0121 18:15:44.733964 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bb5lc" event={"ID":"fedcb6dd-93e2-4530-b748-52a296d7809d","Type":"ContainerStarted","Data":"d2ad26f400117833e7aba3bb808d14e6e25fce52e04cf3ccb8e92a0828e7188f"} Jan 21 18:15:44 crc kubenswrapper[5099]: I0121 18:15:44.741856 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" event={"ID":"d7521550-bc40-43eb-bcb0-f563416d810b","Type":"ContainerStarted","Data":"b9195a13c90ca87a12f76ad8f66c088f4af22fe03f2960a4bab916c4d5d5b2d7"} Jan 21 18:15:44 crc kubenswrapper[5099]: I0121 18:15:44.742230 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:44 crc kubenswrapper[5099]: I0121 18:15:44.742426 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:44 crc kubenswrapper[5099]: I0121 18:15:44.742680 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:44 crc kubenswrapper[5099]: I0121 18:15:44.743877 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dgxdk" event={"ID":"3392e8a0-3d8c-4ed1-ab91-c1d7d402265a","Type":"ContainerStarted","Data":"5762c664ea037d23b89bde248597962bcffe7b28db78f69022209df3289ee4a1"} Jan 21 18:15:44 crc kubenswrapper[5099]: I0121 18:15:44.782071 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dgxdk" podStartSLOduration=73.78205343 podStartE2EDuration="1m13.78205343s" podCreationTimestamp="2026-01-21 18:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:15:44.781372313 +0000 UTC m=+102.195334784" watchObservedRunningTime="2026-01-21 18:15:44.78205343 +0000 UTC m=+102.196015891" Jan 21 18:15:44 crc kubenswrapper[5099]: I0121 18:15:44.793049 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:44 crc kubenswrapper[5099]: I0121 18:15:44.793168 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:15:44 crc kubenswrapper[5099]: I0121 18:15:44.861618 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" podStartSLOduration=73.861589549 podStartE2EDuration="1m13.861589549s" podCreationTimestamp="2026-01-21 18:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:15:44.819070729 +0000 UTC m=+102.233033190" watchObservedRunningTime="2026-01-21 18:15:44.861589549 +0000 UTC m=+102.275552010" Jan 21 18:15:44 crc kubenswrapper[5099]: I0121 18:15:44.912824 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 18:15:44 crc kubenswrapper[5099]: E0121 18:15:44.913258 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 21 18:15:44 crc kubenswrapper[5099]: I0121 18:15:44.912824 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 18:15:44 crc kubenswrapper[5099]: E0121 18:15:44.913352 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 21 18:15:44 crc kubenswrapper[5099]: I0121 18:15:44.913382 5099 scope.go:117] "RemoveContainer" containerID="1dec0ab2e83799703e0d0ba5f91da97aaa7f611990710cb8ee7aa49e64a46994" Jan 21 18:15:44 crc kubenswrapper[5099]: E0121 18:15:44.913532 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 21 18:15:45 crc kubenswrapper[5099]: I0121 18:15:45.915041 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tsdhb" Jan 21 18:15:45 crc kubenswrapper[5099]: E0121 18:15:45.915178 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tsdhb" podUID="0d26f0ad-829f-4f64-82b5-1292bd2316f0" Jan 21 18:15:45 crc kubenswrapper[5099]: I0121 18:15:45.915619 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 18:15:45 crc kubenswrapper[5099]: E0121 18:15:45.915683 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 21 18:15:46 crc kubenswrapper[5099]: I0121 18:15:46.925627 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 18:15:46 crc kubenswrapper[5099]: E0121 18:15:46.925826 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 21 18:15:46 crc kubenswrapper[5099]: I0121 18:15:46.925627 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 18:15:46 crc kubenswrapper[5099]: E0121 18:15:46.926326 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 21 18:15:47 crc kubenswrapper[5099]: I0121 18:15:47.913583 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tsdhb" Jan 21 18:15:47 crc kubenswrapper[5099]: I0121 18:15:47.913612 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 18:15:47 crc kubenswrapper[5099]: E0121 18:15:47.913861 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tsdhb" podUID="0d26f0ad-829f-4f64-82b5-1292bd2316f0" Jan 21 18:15:47 crc kubenswrapper[5099]: E0121 18:15:47.914040 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 21 18:15:48 crc kubenswrapper[5099]: I0121 18:15:48.913568 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 18:15:48 crc kubenswrapper[5099]: E0121 18:15:48.913762 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 21 18:15:48 crc kubenswrapper[5099]: I0121 18:15:48.913786 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 18:15:48 crc kubenswrapper[5099]: E0121 18:15:48.913981 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 21 18:15:49 crc kubenswrapper[5099]: I0121 18:15:49.924539 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 18:15:49 crc kubenswrapper[5099]: E0121 18:15:49.924869 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 21 18:15:49 crc kubenswrapper[5099]: I0121 18:15:49.925493 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tsdhb" Jan 21 18:15:49 crc kubenswrapper[5099]: E0121 18:15:49.925565 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tsdhb" podUID="0d26f0ad-829f-4f64-82b5-1292bd2316f0" Jan 21 18:15:50 crc kubenswrapper[5099]: I0121 18:15:50.913091 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 18:15:50 crc kubenswrapper[5099]: E0121 18:15:50.913224 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 21 18:15:50 crc kubenswrapper[5099]: I0121 18:15:50.913405 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 18:15:50 crc kubenswrapper[5099]: E0121 18:15:50.913651 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 21 18:15:51 crc kubenswrapper[5099]: I0121 18:15:51.913012 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tsdhb" Jan 21 18:15:51 crc kubenswrapper[5099]: E0121 18:15:51.913304 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tsdhb" podUID="0d26f0ad-829f-4f64-82b5-1292bd2316f0" Jan 21 18:15:51 crc kubenswrapper[5099]: I0121 18:15:51.913458 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 18:15:51 crc kubenswrapper[5099]: E0121 18:15:51.913671 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 21 18:15:52 crc kubenswrapper[5099]: I0121 18:15:52.468870 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-tsdhb"] Jan 21 18:15:52 crc kubenswrapper[5099]: I0121 18:15:52.866755 5099 generic.go:358] "Generic (PLEG): container finished" podID="fedcb6dd-93e2-4530-b748-52a296d7809d" containerID="d2ad26f400117833e7aba3bb808d14e6e25fce52e04cf3ccb8e92a0828e7188f" exitCode=0 Jan 21 18:15:52 crc kubenswrapper[5099]: I0121 18:15:52.866847 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bb5lc" event={"ID":"fedcb6dd-93e2-4530-b748-52a296d7809d","Type":"ContainerDied","Data":"d2ad26f400117833e7aba3bb808d14e6e25fce52e04cf3ccb8e92a0828e7188f"} Jan 21 18:15:52 crc kubenswrapper[5099]: I0121 18:15:52.867036 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tsdhb" Jan 21 18:15:52 crc kubenswrapper[5099]: E0121 18:15:52.867206 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tsdhb" podUID="0d26f0ad-829f-4f64-82b5-1292bd2316f0" Jan 21 18:15:52 crc kubenswrapper[5099]: I0121 18:15:52.913576 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 18:15:52 crc kubenswrapper[5099]: E0121 18:15:52.913763 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 21 18:15:52 crc kubenswrapper[5099]: I0121 18:15:52.913998 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 18:15:52 crc kubenswrapper[5099]: E0121 18:15:52.914058 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 21 18:15:53 crc kubenswrapper[5099]: I0121 18:15:53.465982 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 18:15:53 crc kubenswrapper[5099]: E0121 18:15:53.466333 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 18:15:53 crc kubenswrapper[5099]: E0121 18:15:53.466592 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 18:15:53 crc kubenswrapper[5099]: E0121 18:15:53.466615 5099 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 18:15:53 crc kubenswrapper[5099]: I0121 18:15:53.466791 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 18:15:53 crc kubenswrapper[5099]: E0121 18:15:53.466874 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-21 18:16:25.466825988 +0000 UTC m=+142.880788539 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 18:15:53 crc kubenswrapper[5099]: I0121 18:15:53.467120 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 18:15:53 crc kubenswrapper[5099]: I0121 18:15:53.467223 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 18:15:53 crc kubenswrapper[5099]: E0121 18:15:53.467414 5099 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 18:15:53 crc kubenswrapper[5099]: E0121 18:15:53.467467 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 18:15:53 crc kubenswrapper[5099]: E0121 18:15:53.467492 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 18:15:53 crc kubenswrapper[5099]: E0121 18:15:53.467509 5099 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 18:15:53 crc kubenswrapper[5099]: E0121 18:15:53.467402 5099 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 18:15:53 crc kubenswrapper[5099]: E0121 18:15:53.467556 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-21 18:16:25.467540406 +0000 UTC m=+142.881502877 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 18:15:53 crc kubenswrapper[5099]: E0121 18:15:53.467607 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-21 18:16:25.467576647 +0000 UTC m=+142.881539128 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 18:15:53 crc kubenswrapper[5099]: E0121 18:15:53.467642 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-21 18:16:25.467626938 +0000 UTC m=+142.881589399 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 18:15:53 crc kubenswrapper[5099]: I0121 18:15:53.568248 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:15:53 crc kubenswrapper[5099]: E0121 18:15:53.568398 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:25.568372036 +0000 UTC m=+142.982334497 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:15:53 crc kubenswrapper[5099]: I0121 18:15:53.670323 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d26f0ad-829f-4f64-82b5-1292bd2316f0-metrics-certs\") pod \"network-metrics-daemon-tsdhb\" (UID: \"0d26f0ad-829f-4f64-82b5-1292bd2316f0\") " pod="openshift-multus/network-metrics-daemon-tsdhb" Jan 21 18:15:53 crc kubenswrapper[5099]: E0121 18:15:53.670533 5099 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 18:15:53 crc kubenswrapper[5099]: E0121 18:15:53.670626 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d26f0ad-829f-4f64-82b5-1292bd2316f0-metrics-certs podName:0d26f0ad-829f-4f64-82b5-1292bd2316f0 nodeName:}" failed. No retries permitted until 2026-01-21 18:16:25.670600823 +0000 UTC m=+143.084563284 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0d26f0ad-829f-4f64-82b5-1292bd2316f0-metrics-certs") pod "network-metrics-daemon-tsdhb" (UID: "0d26f0ad-829f-4f64-82b5-1292bd2316f0") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 18:15:53 crc kubenswrapper[5099]: I0121 18:15:53.871817 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"62e9d80fcb057d25a2e6ba567b52cae06ba6f64e78738aa707dab1e55ce72f36"} Jan 21 18:15:53 crc kubenswrapper[5099]: I0121 18:15:53.879899 5099 generic.go:358] "Generic (PLEG): container finished" podID="fedcb6dd-93e2-4530-b748-52a296d7809d" containerID="a9efa61500948dc394db117e0c82dbbe441b1182b299468f775df78b830702db" exitCode=0 Jan 21 18:15:53 crc kubenswrapper[5099]: I0121 18:15:53.879973 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bb5lc" event={"ID":"fedcb6dd-93e2-4530-b748-52a296d7809d","Type":"ContainerDied","Data":"a9efa61500948dc394db117e0c82dbbe441b1182b299468f775df78b830702db"} Jan 21 18:15:53 crc kubenswrapper[5099]: I0121 18:15:53.913755 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 18:15:53 crc kubenswrapper[5099]: E0121 18:15:53.913942 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 21 18:15:53 crc kubenswrapper[5099]: I0121 18:15:53.913783 5099 util.go:30] "No sandbox for pod can be found. 
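The UnmountVolume.TearDown failure at the top of this block is different in kind from the mount errors around it: "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers" typically means the CSI plugin has not yet re-registered with this kubelet instance after the restart. A hypothetical client-go check (the kubeconfig path and node name are assumptions) that lists what the node currently advertises:

    // Hypothetical check, not from any tooling in this log: list the CSI
    // drivers registered on node "crc" via its CSINode object. TearDown can
    // only succeed for drivers that appear here.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        n, err := cs.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, d := range n.Spec.Drivers {
            fmt.Println("registered CSI driver:", d.Name)
        }
    }
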
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tsdhb" Jan 21 18:15:53 crc kubenswrapper[5099]: E0121 18:15:53.914308 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tsdhb" podUID="0d26f0ad-829f-4f64-82b5-1292bd2316f0" Jan 21 18:15:54 crc kubenswrapper[5099]: I0121 18:15:54.888361 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bb5lc" event={"ID":"fedcb6dd-93e2-4530-b748-52a296d7809d","Type":"ContainerStarted","Data":"cdfa191fb62259d33f13da879f4d04c698532c444f93004f3059d5697e7b3e0b"} Jan 21 18:15:54 crc kubenswrapper[5099]: I0121 18:15:54.913119 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 18:15:54 crc kubenswrapper[5099]: I0121 18:15:54.913466 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 18:15:54 crc kubenswrapper[5099]: E0121 18:15:54.913834 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 21 18:15:54 crc kubenswrapper[5099]: E0121 18:15:54.913986 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 21 18:15:55 crc kubenswrapper[5099]: I0121 18:15:55.913822 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tsdhb" Jan 21 18:15:55 crc kubenswrapper[5099]: I0121 18:15:55.913822 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 18:15:55 crc kubenswrapper[5099]: E0121 18:15:55.914546 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tsdhb" podUID="0d26f0ad-829f-4f64-82b5-1292bd2316f0" Jan 21 18:15:55 crc kubenswrapper[5099]: E0121 18:15:55.914604 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 21 18:15:56 crc kubenswrapper[5099]: I0121 18:15:56.913658 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 18:15:56 crc kubenswrapper[5099]: E0121 18:15:56.914000 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 21 18:15:56 crc kubenswrapper[5099]: I0121 18:15:56.913716 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 18:15:56 crc kubenswrapper[5099]: E0121 18:15:56.914312 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 21 18:15:57 crc kubenswrapper[5099]: I0121 18:15:57.913564 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 18:15:57 crc kubenswrapper[5099]: I0121 18:15:57.913651 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tsdhb" Jan 21 18:15:57 crc kubenswrapper[5099]: E0121 18:15:57.913822 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 21 18:15:57 crc kubenswrapper[5099]: E0121 18:15:57.914059 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tsdhb" podUID="0d26f0ad-829f-4f64-82b5-1292bd2316f0" Jan 21 18:15:57 crc kubenswrapper[5099]: I0121 18:15:57.917686 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady" Jan 21 18:15:57 crc kubenswrapper[5099]: I0121 18:15:57.917895 5099 kubelet_node_status.go:550] "Fast updating node status as it just became ready" Jan 21 18:15:57 crc kubenswrapper[5099]: I0121 18:15:57.953019 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-bb5lc" podStartSLOduration=86.952989674 podStartE2EDuration="1m26.952989674s" podCreationTimestamp="2026-01-21 18:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:15:54.917032121 +0000 UTC m=+112.330994582" watchObservedRunningTime="2026-01-21 18:15:57.952989674 +0000 UTC m=+115.366952135" Jan 21 18:15:57 crc kubenswrapper[5099]: I0121 18:15:57.954400 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-5pwm7"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.320419 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-84k5t"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.320557 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-5pwm7" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.322931 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.323192 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.323640 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.323706 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.323843 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.324198 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.324187 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-nb9km"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.327161 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-kbdp8"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.330034 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-84k5t" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.331466 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.336544 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.336974 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.337470 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.337590 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.337651 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.337709 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.337833 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.337906 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.338842 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.338883 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-d55cs"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.339020 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-nb9km" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.339578 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-kbdp8" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.343048 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-477z9"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.343206 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.343300 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-d55cs" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.343577 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.345899 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-5p85q"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.346506 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.346714 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.346937 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.347046 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.347171 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.347260 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.347323 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.347490 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.347516 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.349332 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-btpkr"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.349569 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.349963 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.350068 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.350162 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.351768 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-477z9" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.353014 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.353353 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.353014 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.356798 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.357048 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.357505 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.357518 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-vs6xv"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.357709 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-btpkr" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.357709 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-5p85q" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.357976 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.360595 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-c6bc8"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.360827 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-vs6xv" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.361497 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.363456 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.370382 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.370872 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.371044 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4fr9d"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.371270 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.371271 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-c6bc8" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.381151 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.382053 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.382140 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.382064 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.382313 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.382332 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.382606 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.382761 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.382790 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.382821 5099 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.382891 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.383204 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-lhgtf"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.383472 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.383586 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.383723 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.383513 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.383861 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.383759 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.384066 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.384004 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.383997 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.385603 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.385873 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4fr9d" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.386097 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.389156 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.389529 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-nx9kc"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.390057 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-lhgtf" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.393411 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-nx9kc" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.394555 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-xhj5t"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.394786 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.395616 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.396052 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.399693 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.399720 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.400028 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.400128 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.400285 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.400324 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.400437 5099 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.400636 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.400704 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.401881 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.405380 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-xhj5t" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.405835 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.405067 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-6qnjf"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.410059 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.411181 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.414239 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-vgfbc"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.414337 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.418412 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.418589 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.419229 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.419349 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.419465 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.419992 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.420003 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.420296 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.420481 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.420637 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.421203 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.422962 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.423032 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-xfrc5"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.424011 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-vgfbc" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.427220 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.430290 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.435044 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-4j82v"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.435761 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-xfrc5" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.437009 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0494dafa-d272-45bf-a11e-7ca78f92223d-audit\") pod \"apiserver-9ddfb9f55-84k5t\" (UID: \"0494dafa-d272-45bf-a11e-7ca78f92223d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-84k5t" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.437055 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wr5h\" (UniqueName: \"kubernetes.io/projected/0494dafa-d272-45bf-a11e-7ca78f92223d-kube-api-access-6wr5h\") pod \"apiserver-9ddfb9f55-84k5t\" (UID: \"0494dafa-d272-45bf-a11e-7ca78f92223d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-84k5t" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.437085 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11b4e369-201a-410c-a66c-9612fc9fafa8-config\") pod \"kube-storage-version-migrator-operator-565b79b866-4fr9d\" (UID: \"11b4e369-201a-410c-a66c-9612fc9fafa8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4fr9d" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.437117 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/ee64e319-d2fd-4a23-808e-a4ab684a16af-available-featuregates\") pod \"openshift-config-operator-5777786469-lhgtf\" (UID: \"ee64e319-d2fd-4a23-808e-a4ab684a16af\") " pod="openshift-config-operator/openshift-config-operator-5777786469-lhgtf" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.437147 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdgtx\" (UniqueName: \"kubernetes.io/projected/ee64e319-d2fd-4a23-808e-a4ab684a16af-kube-api-access-cdgtx\") pod \"openshift-config-operator-5777786469-lhgtf\" (UID: \"ee64e319-d2fd-4a23-808e-a4ab684a16af\") " pod="openshift-config-operator/openshift-config-operator-5777786469-lhgtf" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.437175 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85ddc24f-5591-4300-9269-cbc659dc7b4f-config\") pod \"controller-manager-65b6cccf98-5pwm7\" (UID: \"85ddc24f-5591-4300-9269-cbc659dc7b4f\") " 
pod="openshift-controller-manager/controller-manager-65b6cccf98-5pwm7" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.437204 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9ab60787-a0f6-4772-96ae-8278cdada627-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-nx9kc\" (UID: \"9ab60787-a0f6-4772-96ae-8278cdada627\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-nx9kc" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.437334 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svmm4\" (UniqueName: \"kubernetes.io/projected/287553f1-f80f-47bb-8a01-1930cd0e5d2c-kube-api-access-svmm4\") pod \"cluster-samples-operator-6b564684c8-c6bc8\" (UID: \"287553f1-f80f-47bb-8a01-1930cd0e5d2c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-c6bc8" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.437424 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0494dafa-d272-45bf-a11e-7ca78f92223d-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-84k5t\" (UID: \"0494dafa-d272-45bf-a11e-7ca78f92223d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-84k5t" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.437462 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f61a6cf-7081-41ed-9e89-05212a634fb0-config\") pod \"route-controller-manager-776cdc94d6-5p85q\" (UID: \"9f61a6cf-7081-41ed-9e89-05212a634fb0\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-5p85q" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.437492 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b99b91a3-bde7-4051-b805-2b015cbd3ab6-config\") pod \"openshift-apiserver-operator-846cbfc458-nb9km\" (UID: \"b99b91a3-bde7-4051-b805-2b015cbd3ab6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-nb9km" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.437560 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/85ddc24f-5591-4300-9269-cbc659dc7b4f-tmp\") pod \"controller-manager-65b6cccf98-5pwm7\" (UID: \"85ddc24f-5591-4300-9269-cbc659dc7b4f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-5pwm7" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.437606 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e5a1f9f-a6df-4d87-bc2d-509d2632fb32-config\") pod \"machine-api-operator-755bb95488-477z9\" (UID: \"0e5a1f9f-a6df-4d87-bc2d-509d2632fb32\") " pod="openshift-machine-api/machine-api-operator-755bb95488-477z9" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.437652 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0494dafa-d272-45bf-a11e-7ca78f92223d-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-84k5t\" (UID: \"0494dafa-d272-45bf-a11e-7ca78f92223d\") " 
pod="openshift-apiserver/apiserver-9ddfb9f55-84k5t" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.437805 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0494dafa-d272-45bf-a11e-7ca78f92223d-audit-dir\") pod \"apiserver-9ddfb9f55-84k5t\" (UID: \"0494dafa-d272-45bf-a11e-7ca78f92223d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-84k5t" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.438115 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc955325-c8fa-4454-ab18-2d7ea44f7da4-config\") pod \"openshift-controller-manager-operator-686468bdd5-vs6xv\" (UID: \"cc955325-c8fa-4454-ab18-2d7ea44f7da4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-vs6xv" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.438158 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cc955325-c8fa-4454-ab18-2d7ea44f7da4-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-vs6xv\" (UID: \"cc955325-c8fa-4454-ab18-2d7ea44f7da4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-vs6xv" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.438313 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcp4k\" (UniqueName: \"kubernetes.io/projected/cc955325-c8fa-4454-ab18-2d7ea44f7da4-kube-api-access-tcp4k\") pod \"openshift-controller-manager-operator-686468bdd5-vs6xv\" (UID: \"cc955325-c8fa-4454-ab18-2d7ea44f7da4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-vs6xv" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.438384 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/282f137b-885c-4e38-ac24-c35a21457457-auth-proxy-config\") pod \"machine-approver-54c688565-d55cs\" (UID: \"282f137b-885c-4e38-ac24-c35a21457457\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-d55cs" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.438432 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ffb7e64-0677-44ec-971d-fda3f9b87e2d-trusted-ca-bundle\") pod \"apiserver-8596bd845d-btpkr\" (UID: \"0ffb7e64-0677-44ec-971d-fda3f9b87e2d\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-btpkr" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.438472 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mdsm\" (UniqueName: \"kubernetes.io/projected/11b4e369-201a-410c-a66c-9612fc9fafa8-kube-api-access-4mdsm\") pod \"kube-storage-version-migrator-operator-565b79b866-4fr9d\" (UID: \"11b4e369-201a-410c-a66c-9612fc9fafa8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4fr9d" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.442890 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 
18:15:58.438673 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgg79\" (UniqueName: \"kubernetes.io/projected/85ddc24f-5591-4300-9269-cbc659dc7b4f-kube-api-access-lgg79\") pod \"controller-manager-65b6cccf98-5pwm7\" (UID: \"85ddc24f-5591-4300-9269-cbc659dc7b4f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-5pwm7" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.447421 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0494dafa-d272-45bf-a11e-7ca78f92223d-etcd-client\") pod \"apiserver-9ddfb9f55-84k5t\" (UID: \"0494dafa-d272-45bf-a11e-7ca78f92223d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-84k5t" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.447543 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0ffb7e64-0677-44ec-971d-fda3f9b87e2d-etcd-serving-ca\") pod \"apiserver-8596bd845d-btpkr\" (UID: \"0ffb7e64-0677-44ec-971d-fda3f9b87e2d\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-btpkr" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.447647 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9ab60787-a0f6-4772-96ae-8278cdada627-srv-cert\") pod \"catalog-operator-75ff9f647d-nx9kc\" (UID: \"9ab60787-a0f6-4772-96ae-8278cdada627\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-nx9kc" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.447767 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/287553f1-f80f-47bb-8a01-1930cd0e5d2c-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-c6bc8\" (UID: \"287553f1-f80f-47bb-8a01-1930cd0e5d2c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-c6bc8" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.447893 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9f61a6cf-7081-41ed-9e89-05212a634fb0-client-ca\") pod \"route-controller-manager-776cdc94d6-5p85q\" (UID: \"9f61a6cf-7081-41ed-9e89-05212a634fb0\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-5p85q" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.448072 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0494dafa-d272-45bf-a11e-7ca78f92223d-node-pullsecrets\") pod \"apiserver-9ddfb9f55-84k5t\" (UID: \"0494dafa-d272-45bf-a11e-7ca78f92223d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-84k5t" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.448099 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.448136 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0494dafa-d272-45bf-a11e-7ca78f92223d-image-import-ca\") pod \"apiserver-9ddfb9f55-84k5t\" (UID: 
\"0494dafa-d272-45bf-a11e-7ca78f92223d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-84k5t" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.450283 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0494dafa-d272-45bf-a11e-7ca78f92223d-config\") pod \"apiserver-9ddfb9f55-84k5t\" (UID: \"0494dafa-d272-45bf-a11e-7ca78f92223d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-84k5t" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.450524 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4p94\" (UniqueName: \"kubernetes.io/projected/b99b91a3-bde7-4051-b805-2b015cbd3ab6-kube-api-access-f4p94\") pod \"openshift-apiserver-operator-846cbfc458-nb9km\" (UID: \"b99b91a3-bde7-4051-b805-2b015cbd3ab6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-nb9km" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.450646 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/0e5a1f9f-a6df-4d87-bc2d-509d2632fb32-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-477z9\" (UID: \"0e5a1f9f-a6df-4d87-bc2d-509d2632fb32\") " pod="openshift-machine-api/machine-api-operator-755bb95488-477z9" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.450752 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85ddc24f-5591-4300-9269-cbc659dc7b4f-client-ca\") pod \"controller-manager-65b6cccf98-5pwm7\" (UID: \"85ddc24f-5591-4300-9269-cbc659dc7b4f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-5pwm7" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.450887 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9f61a6cf-7081-41ed-9e89-05212a634fb0-tmp\") pod \"route-controller-manager-776cdc94d6-5p85q\" (UID: \"9f61a6cf-7081-41ed-9e89-05212a634fb0\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-5p85q" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.450989 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/282f137b-885c-4e38-ac24-c35a21457457-config\") pod \"machine-approver-54c688565-d55cs\" (UID: \"282f137b-885c-4e38-ac24-c35a21457457\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-d55cs" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.451073 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jnfd\" (UniqueName: \"kubernetes.io/projected/3df86cb4-acbc-40de-9991-9ba4cc6d0397-kube-api-access-7jnfd\") pod \"authentication-operator-7f5c659b84-kbdp8\" (UID: \"3df86cb4-acbc-40de-9991-9ba4cc6d0397\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-kbdp8" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.451197 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc955325-c8fa-4454-ab18-2d7ea44f7da4-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-vs6xv\" (UID: 
\"cc955325-c8fa-4454-ab18-2d7ea44f7da4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-vs6xv" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.451308 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3df86cb4-acbc-40de-9991-9ba4cc6d0397-serving-cert\") pod \"authentication-operator-7f5c659b84-kbdp8\" (UID: \"3df86cb4-acbc-40de-9991-9ba4cc6d0397\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-kbdp8" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.451412 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3df86cb4-acbc-40de-9991-9ba4cc6d0397-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-kbdp8\" (UID: \"3df86cb4-acbc-40de-9991-9ba4cc6d0397\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-kbdp8" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.451431 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-fwthw"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.451578 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c7rm\" (UniqueName: \"kubernetes.io/projected/9f61a6cf-7081-41ed-9e89-05212a634fb0-kube-api-access-8c7rm\") pod \"route-controller-manager-776cdc94d6-5p85q\" (UID: \"9f61a6cf-7081-41ed-9e89-05212a634fb0\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-5p85q" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.451659 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85ddc24f-5591-4300-9269-cbc659dc7b4f-serving-cert\") pod \"controller-manager-65b6cccf98-5pwm7\" (UID: \"85ddc24f-5591-4300-9269-cbc659dc7b4f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-5pwm7" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.451685 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee64e319-d2fd-4a23-808e-a4ab684a16af-serving-cert\") pod \"openshift-config-operator-5777786469-lhgtf\" (UID: \"ee64e319-d2fd-4a23-808e-a4ab684a16af\") " pod="openshift-config-operator/openshift-config-operator-5777786469-lhgtf" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.451680 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-4j82v" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.451708 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0e5a1f9f-a6df-4d87-bc2d-509d2632fb32-images\") pod \"machine-api-operator-755bb95488-477z9\" (UID: \"0e5a1f9f-a6df-4d87-bc2d-509d2632fb32\") " pod="openshift-machine-api/machine-api-operator-755bb95488-477z9" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.451915 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ffb7e64-0677-44ec-971d-fda3f9b87e2d-serving-cert\") pod \"apiserver-8596bd845d-btpkr\" (UID: \"0ffb7e64-0677-44ec-971d-fda3f9b87e2d\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-btpkr" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.451951 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx6x4\" (UniqueName: \"kubernetes.io/projected/282f137b-885c-4e38-ac24-c35a21457457-kube-api-access-cx6x4\") pod \"machine-approver-54c688565-d55cs\" (UID: \"282f137b-885c-4e38-ac24-c35a21457457\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-d55cs" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.451975 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0ffb7e64-0677-44ec-971d-fda3f9b87e2d-encryption-config\") pod \"apiserver-8596bd845d-btpkr\" (UID: \"0ffb7e64-0677-44ec-971d-fda3f9b87e2d\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-btpkr" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.452070 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0ffb7e64-0677-44ec-971d-fda3f9b87e2d-audit-dir\") pod \"apiserver-8596bd845d-btpkr\" (UID: \"0ffb7e64-0677-44ec-971d-fda3f9b87e2d\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-btpkr" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.452152 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3df86cb4-acbc-40de-9991-9ba4cc6d0397-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-kbdp8\" (UID: \"3df86cb4-acbc-40de-9991-9ba4cc6d0397\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-kbdp8" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.452215 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f61a6cf-7081-41ed-9e89-05212a634fb0-serving-cert\") pod \"route-controller-manager-776cdc94d6-5p85q\" (UID: \"9f61a6cf-7081-41ed-9e89-05212a634fb0\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-5p85q" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.452247 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr7bn\" (UniqueName: \"kubernetes.io/projected/9ab60787-a0f6-4772-96ae-8278cdada627-kube-api-access-wr7bn\") pod \"catalog-operator-75ff9f647d-nx9kc\" (UID: \"9ab60787-a0f6-4772-96ae-8278cdada627\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-nx9kc" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.452278 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/9ab60787-a0f6-4772-96ae-8278cdada627-tmpfs\") pod \"catalog-operator-75ff9f647d-nx9kc\" (UID: \"9ab60787-a0f6-4772-96ae-8278cdada627\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-nx9kc" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.452308 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0ffb7e64-0677-44ec-971d-fda3f9b87e2d-audit-policies\") pod \"apiserver-8596bd845d-btpkr\" (UID: \"0ffb7e64-0677-44ec-971d-fda3f9b87e2d\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-btpkr" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.452375 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11b4e369-201a-410c-a66c-9612fc9fafa8-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-4fr9d\" (UID: \"11b4e369-201a-410c-a66c-9612fc9fafa8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4fr9d" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.452410 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/85ddc24f-5591-4300-9269-cbc659dc7b4f-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-5pwm7\" (UID: \"85ddc24f-5591-4300-9269-cbc659dc7b4f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-5pwm7" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.452501 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0ffb7e64-0677-44ec-971d-fda3f9b87e2d-etcd-client\") pod \"apiserver-8596bd845d-btpkr\" (UID: \"0ffb7e64-0677-44ec-971d-fda3f9b87e2d\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-btpkr" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.452592 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0494dafa-d272-45bf-a11e-7ca78f92223d-encryption-config\") pod \"apiserver-9ddfb9f55-84k5t\" (UID: \"0494dafa-d272-45bf-a11e-7ca78f92223d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-84k5t" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.452668 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b99b91a3-bde7-4051-b805-2b015cbd3ab6-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-nb9km\" (UID: \"b99b91a3-bde7-4051-b805-2b015cbd3ab6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-nb9km" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.452699 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0494dafa-d272-45bf-a11e-7ca78f92223d-serving-cert\") pod \"apiserver-9ddfb9f55-84k5t\" (UID: \"0494dafa-d272-45bf-a11e-7ca78f92223d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-84k5t" Jan 21 
18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.452745 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5kzm\" (UniqueName: \"kubernetes.io/projected/0ffb7e64-0677-44ec-971d-fda3f9b87e2d-kube-api-access-j5kzm\") pod \"apiserver-8596bd845d-btpkr\" (UID: \"0ffb7e64-0677-44ec-971d-fda3f9b87e2d\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-btpkr" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.452778 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3df86cb4-acbc-40de-9991-9ba4cc6d0397-config\") pod \"authentication-operator-7f5c659b84-kbdp8\" (UID: \"3df86cb4-acbc-40de-9991-9ba4cc6d0397\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-kbdp8" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.452827 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwjcj\" (UniqueName: \"kubernetes.io/projected/0e5a1f9f-a6df-4d87-bc2d-509d2632fb32-kube-api-access-cwjcj\") pod \"machine-api-operator-755bb95488-477z9\" (UID: \"0e5a1f9f-a6df-4d87-bc2d-509d2632fb32\") " pod="openshift-machine-api/machine-api-operator-755bb95488-477z9" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.452856 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/282f137b-885c-4e38-ac24-c35a21457457-machine-approver-tls\") pod \"machine-approver-54c688565-d55cs\" (UID: \"282f137b-885c-4e38-ac24-c35a21457457\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-d55cs" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.456714 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-vr7cg"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.456872 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-fwthw" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.460980 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-p9ggs"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.461166 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-vr7cg" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.463541 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.468846 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-xr84c"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.469052 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-p9ggs" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.474296 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-hkwx9"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.474671 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-xr84c" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.477344 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-62qpd"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.477563 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-hkwx9" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.480191 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-lxg2b"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.480330 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-62qpd" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.483467 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.484384 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-74545575db-ddw2j"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.484564 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-lxg2b" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.488214 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-q9l8j"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.488401 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-ddw2j" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.492253 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483655-48djt"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.492344 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-q9l8j" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.495229 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-6r5xz"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.495521 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483655-48djt" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.499767 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-6ww7n"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.499999 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-6r5xz" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.505395 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-tjl2r"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.505565 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-6ww7n" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.510495 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-2zxgx"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.510869 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.514978 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-68cf44c8b8-lqqhp"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.515814 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-2zxgx" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.521962 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-48lzl"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.538153 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.538362 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.543430 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.547788 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-z2ttk"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.547957 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-48lzl" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.553116 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjbt9"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.553390 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-z2ttk" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.553636 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ffb7e64-0677-44ec-971d-fda3f9b87e2d-trusted-ca-bundle\") pod \"apiserver-8596bd845d-btpkr\" (UID: \"0ffb7e64-0677-44ec-971d-fda3f9b87e2d\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-btpkr" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.553673 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lgg79\" (UniqueName: \"kubernetes.io/projected/85ddc24f-5591-4300-9269-cbc659dc7b4f-kube-api-access-lgg79\") pod \"controller-manager-65b6cccf98-5pwm7\" (UID: \"85ddc24f-5591-4300-9269-cbc659dc7b4f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-5pwm7" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.553695 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0494dafa-d272-45bf-a11e-7ca78f92223d-etcd-client\") pod \"apiserver-9ddfb9f55-84k5t\" (UID: \"0494dafa-d272-45bf-a11e-7ca78f92223d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-84k5t" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.553712 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0ffb7e64-0677-44ec-971d-fda3f9b87e2d-etcd-serving-ca\") pod \"apiserver-8596bd845d-btpkr\" (UID: \"0ffb7e64-0677-44ec-971d-fda3f9b87e2d\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-btpkr" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.553762 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2b370d45-15f6-4f78-90d8-f15bb7f31949-apiservice-cert\") pod \"packageserver-7d4fc7d867-xr84c\" (UID: \"2b370d45-15f6-4f78-90d8-f15bb7f31949\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-xr84c" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.553779 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zlnm\" (UniqueName: \"kubernetes.io/projected/05e481c5-0ad1-4c76-bf43-a32b82b763c7-kube-api-access-5zlnm\") pod \"collect-profiles-29483655-48djt\" (UID: \"05e481c5-0ad1-4c76-bf43-a32b82b763c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483655-48djt" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.553797 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-6qnjf\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.553815 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/419c7428-8eea-4a26-8329-f359a77e5c80-etcd-client\") pod \"etcd-operator-69b85846b6-vr7cg\" (UID: \"419c7428-8eea-4a26-8329-f359a77e5c80\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-vr7cg" Jan 21 18:15:58 crc kubenswrapper[5099]: 
I0121 18:15:58.553830 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7vsh\" (UniqueName: \"kubernetes.io/projected/ac0d4bff-1835-45f9-bca5-e84de2f1c705-kube-api-access-h7vsh\") pod \"migrator-866fcbc849-vgfbc\" (UID: \"ac0d4bff-1835-45f9-bca5-e84de2f1c705\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-vgfbc" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.553851 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0494dafa-d272-45bf-a11e-7ca78f92223d-node-pullsecrets\") pod \"apiserver-9ddfb9f55-84k5t\" (UID: \"0494dafa-d272-45bf-a11e-7ca78f92223d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-84k5t" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.554689 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0494dafa-d272-45bf-a11e-7ca78f92223d-image-import-ca\") pod \"apiserver-9ddfb9f55-84k5t\" (UID: \"0494dafa-d272-45bf-a11e-7ca78f92223d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-84k5t" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.554768 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0494dafa-d272-45bf-a11e-7ca78f92223d-node-pullsecrets\") pod \"apiserver-9ddfb9f55-84k5t\" (UID: \"0494dafa-d272-45bf-a11e-7ca78f92223d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-84k5t" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.554771 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4r4x\" (UniqueName: \"kubernetes.io/projected/63bfe3eb-44bd-45db-8327-52468bb9ca12-kube-api-access-l4r4x\") pod \"service-ca-74545575db-ddw2j\" (UID: \"63bfe3eb-44bd-45db-8327-52468bb9ca12\") " pod="openshift-service-ca/service-ca-74545575db-ddw2j" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.554836 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g6f5\" (UniqueName: \"kubernetes.io/projected/bd987c53-2ea5-4da5-a0ea-bb16dc45cdb0-kube-api-access-8g6f5\") pod \"machine-config-operator-67c9d58cbb-fwthw\" (UID: \"bd987c53-2ea5-4da5-a0ea-bb16dc45cdb0\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-fwthw" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.554950 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-684w2\" (UniqueName: \"kubernetes.io/projected/c13f0ecd-bdc7-4f94-9013-3277f1b20451-kube-api-access-684w2\") pod \"service-ca-operator-5b9c976747-hkwx9\" (UID: \"c13f0ecd-bdc7-4f94-9013-3277f1b20451\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-hkwx9" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.555118 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ffb7e64-0677-44ec-971d-fda3f9b87e2d-trusted-ca-bundle\") pod \"apiserver-8596bd845d-btpkr\" (UID: \"0ffb7e64-0677-44ec-971d-fda3f9b87e2d\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-btpkr" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.555186 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/cc660b0c-3432-4bfa-8349-0f7ac08afce8-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-62qpd\" (UID: \"cc660b0c-3432-4bfa-8349-0f7ac08afce8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-62qpd" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.555279 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-6qnjf\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.555319 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-6qnjf\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.555349 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2b370d45-15f6-4f78-90d8-f15bb7f31949-webhook-cert\") pod \"packageserver-7d4fc7d867-xr84c\" (UID: \"2b370d45-15f6-4f78-90d8-f15bb7f31949\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-xr84c" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.555387 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85ddc24f-5591-4300-9269-cbc659dc7b4f-client-ca\") pod \"controller-manager-65b6cccf98-5pwm7\" (UID: \"85ddc24f-5591-4300-9269-cbc659dc7b4f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-5pwm7" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.555450 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/282f137b-885c-4e38-ac24-c35a21457457-config\") pod \"machine-approver-54c688565-d55cs\" (UID: \"282f137b-885c-4e38-ac24-c35a21457457\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-d55cs" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.555477 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7jnfd\" (UniqueName: \"kubernetes.io/projected/3df86cb4-acbc-40de-9991-9ba4cc6d0397-kube-api-access-7jnfd\") pod \"authentication-operator-7f5c659b84-kbdp8\" (UID: \"3df86cb4-acbc-40de-9991-9ba4cc6d0397\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-kbdp8" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.555496 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/039025e0-e2cb-479d-b87a-9966fa3d96f2-profile-collector-cert\") pod \"olm-operator-5cdf44d969-4j82v\" (UID: \"039025e0-e2cb-479d-b87a-9966fa3d96f2\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-4j82v" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.555514 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/67a0e83c-f043-4329-95ac-4cc0a6ac538f-tmp\") pod \"marketplace-operator-547dbd544d-lxg2b\" (UID: \"67a0e83c-f043-4329-95ac-4cc0a6ac538f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-lxg2b" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.555540 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/39b31197-feb5-4a81-8dca-de4b873dc013-audit-dir\") pod \"oauth-openshift-66458b6674-6qnjf\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.555559 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wlkq\" (UniqueName: \"kubernetes.io/projected/2b370d45-15f6-4f78-90d8-f15bb7f31949-kube-api-access-6wlkq\") pod \"packageserver-7d4fc7d867-xr84c\" (UID: \"2b370d45-15f6-4f78-90d8-f15bb7f31949\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-xr84c" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.555585 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/67a0e83c-f043-4329-95ac-4cc0a6ac538f-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-lxg2b\" (UID: \"67a0e83c-f043-4329-95ac-4cc0a6ac538f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-lxg2b" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.555612 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3df86cb4-acbc-40de-9991-9ba4cc6d0397-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-kbdp8\" (UID: \"3df86cb4-acbc-40de-9991-9ba4cc6d0397\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-kbdp8" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.555635 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c13f0ecd-bdc7-4f94-9013-3277f1b20451-serving-cert\") pod \"service-ca-operator-5b9c976747-hkwx9\" (UID: \"c13f0ecd-bdc7-4f94-9013-3277f1b20451\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-hkwx9" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.557034 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8c7rm\" (UniqueName: \"kubernetes.io/projected/9f61a6cf-7081-41ed-9e89-05212a634fb0-kube-api-access-8c7rm\") pod \"route-controller-manager-776cdc94d6-5p85q\" (UID: \"9f61a6cf-7081-41ed-9e89-05212a634fb0\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-5p85q" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.557072 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0494dafa-d272-45bf-a11e-7ca78f92223d-image-import-ca\") pod \"apiserver-9ddfb9f55-84k5t\" (UID: \"0494dafa-d272-45bf-a11e-7ca78f92223d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-84k5t" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.557117 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3df86cb4-acbc-40de-9991-9ba4cc6d0397-trusted-ca-bundle\") pod 
\"authentication-operator-7f5c659b84-kbdp8\" (UID: \"3df86cb4-acbc-40de-9991-9ba4cc6d0397\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-kbdp8" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.557128 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-6qnjf\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.557229 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0e5a1f9f-a6df-4d87-bc2d-509d2632fb32-images\") pod \"machine-api-operator-755bb95488-477z9\" (UID: \"0e5a1f9f-a6df-4d87-bc2d-509d2632fb32\") " pod="openshift-machine-api/machine-api-operator-755bb95488-477z9" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.557262 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f61a6cf-7081-41ed-9e89-05212a634fb0-serving-cert\") pod \"route-controller-manager-776cdc94d6-5p85q\" (UID: \"9f61a6cf-7081-41ed-9e89-05212a634fb0\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-5p85q" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.557287 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wr7bn\" (UniqueName: \"kubernetes.io/projected/9ab60787-a0f6-4772-96ae-8278cdada627-kube-api-access-wr7bn\") pod \"catalog-operator-75ff9f647d-nx9kc\" (UID: \"9ab60787-a0f6-4772-96ae-8278cdada627\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-nx9kc" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.557320 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-588sd\" (UniqueName: \"kubernetes.io/projected/67a0e83c-f043-4329-95ac-4cc0a6ac538f-kube-api-access-588sd\") pod \"marketplace-operator-547dbd544d-lxg2b\" (UID: \"67a0e83c-f043-4329-95ac-4cc0a6ac538f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-lxg2b" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.557346 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc660b0c-3432-4bfa-8349-0f7ac08afce8-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-62qpd\" (UID: \"cc660b0c-3432-4bfa-8349-0f7ac08afce8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-62qpd" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.557375 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11b4e369-201a-410c-a66c-9612fc9fafa8-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-4fr9d\" (UID: \"11b4e369-201a-410c-a66c-9612fc9fafa8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4fr9d" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.557400 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/85ddc24f-5591-4300-9269-cbc659dc7b4f-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-5pwm7\" (UID: \"85ddc24f-5591-4300-9269-cbc659dc7b4f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-5pwm7" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.557405 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85ddc24f-5591-4300-9269-cbc659dc7b4f-client-ca\") pod \"controller-manager-65b6cccf98-5pwm7\" (UID: \"85ddc24f-5591-4300-9269-cbc659dc7b4f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-5pwm7" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.557440 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0ffb7e64-0677-44ec-971d-fda3f9b87e2d-etcd-client\") pod \"apiserver-8596bd845d-btpkr\" (UID: \"0ffb7e64-0677-44ec-971d-fda3f9b87e2d\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-btpkr" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.557480 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpwwd\" (UniqueName: \"kubernetes.io/projected/039025e0-e2cb-479d-b87a-9966fa3d96f2-kube-api-access-fpwwd\") pod \"olm-operator-5cdf44d969-4j82v\" (UID: \"039025e0-e2cb-479d-b87a-9966fa3d96f2\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-4j82v" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.557632 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/282f137b-885c-4e38-ac24-c35a21457457-config\") pod \"machine-approver-54c688565-d55cs\" (UID: \"282f137b-885c-4e38-ac24-c35a21457457\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-d55cs" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.557877 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3df86cb4-acbc-40de-9991-9ba4cc6d0397-config\") pod \"authentication-operator-7f5c659b84-kbdp8\" (UID: \"3df86cb4-acbc-40de-9991-9ba4cc6d0397\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-kbdp8" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.557946 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/282f137b-885c-4e38-ac24-c35a21457457-machine-approver-tls\") pod \"machine-approver-54c688565-d55cs\" (UID: \"282f137b-885c-4e38-ac24-c35a21457457\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-d55cs" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.557980 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2dde6863-5960-4b1b-b694-be1862901fb0-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-q9l8j\" (UID: \"2dde6863-5960-4b1b-b694-be1862901fb0\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-q9l8j" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.558002 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0494dafa-d272-45bf-a11e-7ca78f92223d-audit\") pod \"apiserver-9ddfb9f55-84k5t\" (UID: \"0494dafa-d272-45bf-a11e-7ca78f92223d\") " 
pod="openshift-apiserver/apiserver-9ddfb9f55-84k5t" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.558025 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6wr5h\" (UniqueName: \"kubernetes.io/projected/0494dafa-d272-45bf-a11e-7ca78f92223d-kube-api-access-6wr5h\") pod \"apiserver-9ddfb9f55-84k5t\" (UID: \"0494dafa-d272-45bf-a11e-7ca78f92223d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-84k5t" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.558047 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11b4e369-201a-410c-a66c-9612fc9fafa8-config\") pod \"kube-storage-version-migrator-operator-565b79b866-4fr9d\" (UID: \"11b4e369-201a-410c-a66c-9612fc9fafa8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4fr9d" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.558083 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/05e481c5-0ad1-4c76-bf43-a32b82b763c7-secret-volume\") pod \"collect-profiles-29483655-48djt\" (UID: \"05e481c5-0ad1-4c76-bf43-a32b82b763c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483655-48djt" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.558109 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c13f0ecd-bdc7-4f94-9013-3277f1b20451-config\") pod \"service-ca-operator-5b9c976747-hkwx9\" (UID: \"c13f0ecd-bdc7-4f94-9013-3277f1b20451\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-hkwx9" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.558141 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/39b31197-feb5-4a81-8dca-de4b873dc013-audit-policies\") pod \"oauth-openshift-66458b6674-6qnjf\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.558166 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85ddc24f-5591-4300-9269-cbc659dc7b4f-config\") pod \"controller-manager-65b6cccf98-5pwm7\" (UID: \"85ddc24f-5591-4300-9269-cbc659dc7b4f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-5pwm7" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.558195 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9ab60787-a0f6-4772-96ae-8278cdada627-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-nx9kc\" (UID: \"9ab60787-a0f6-4772-96ae-8278cdada627\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-nx9kc" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.558222 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/63bfe3eb-44bd-45db-8327-52468bb9ca12-signing-key\") pod \"service-ca-74545575db-ddw2j\" (UID: \"63bfe3eb-44bd-45db-8327-52468bb9ca12\") " pod="openshift-service-ca/service-ca-74545575db-ddw2j" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.558251 5099 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-svmm4\" (UniqueName: \"kubernetes.io/projected/287553f1-f80f-47bb-8a01-1930cd0e5d2c-kube-api-access-svmm4\") pod \"cluster-samples-operator-6b564684c8-c6bc8\" (UID: \"287553f1-f80f-47bb-8a01-1930cd0e5d2c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-c6bc8" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.558274 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f61a6cf-7081-41ed-9e89-05212a634fb0-config\") pod \"route-controller-manager-776cdc94d6-5p85q\" (UID: \"9f61a6cf-7081-41ed-9e89-05212a634fb0\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-5p85q" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.558293 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b99b91a3-bde7-4051-b805-2b015cbd3ab6-config\") pod \"openshift-apiserver-operator-846cbfc458-nb9km\" (UID: \"b99b91a3-bde7-4051-b805-2b015cbd3ab6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-nb9km" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.558313 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/419c7428-8eea-4a26-8329-f359a77e5c80-tmp-dir\") pod \"etcd-operator-69b85846b6-vr7cg\" (UID: \"419c7428-8eea-4a26-8329-f359a77e5c80\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-vr7cg" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.558335 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-6qnjf\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.558377 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0494dafa-d272-45bf-a11e-7ca78f92223d-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-84k5t\" (UID: \"0494dafa-d272-45bf-a11e-7ca78f92223d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-84k5t" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.558399 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0494dafa-d272-45bf-a11e-7ca78f92223d-audit-dir\") pod \"apiserver-9ddfb9f55-84k5t\" (UID: \"0494dafa-d272-45bf-a11e-7ca78f92223d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-84k5t" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.558455 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cc955325-c8fa-4454-ab18-2d7ea44f7da4-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-vs6xv\" (UID: \"cc955325-c8fa-4454-ab18-2d7ea44f7da4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-vs6xv" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.558488 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/9ab60787-a0f6-4772-96ae-8278cdada627-srv-cert\") pod \"catalog-operator-75ff9f647d-nx9kc\" (UID: \"9ab60787-a0f6-4772-96ae-8278cdada627\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-nx9kc" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.558523 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e5a1f9f-a6df-4d87-bc2d-509d2632fb32-config\") pod \"machine-api-operator-755bb95488-477z9\" (UID: \"0e5a1f9f-a6df-4d87-bc2d-509d2632fb32\") " pod="openshift-machine-api/machine-api-operator-755bb95488-477z9" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.558551 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tcp4k\" (UniqueName: \"kubernetes.io/projected/cc955325-c8fa-4454-ab18-2d7ea44f7da4-kube-api-access-tcp4k\") pod \"openshift-controller-manager-operator-686468bdd5-vs6xv\" (UID: \"cc955325-c8fa-4454-ab18-2d7ea44f7da4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-vs6xv" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.558580 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4mdsm\" (UniqueName: \"kubernetes.io/projected/11b4e369-201a-410c-a66c-9612fc9fafa8-kube-api-access-4mdsm\") pod \"kube-storage-version-migrator-operator-565b79b866-4fr9d\" (UID: \"11b4e369-201a-410c-a66c-9612fc9fafa8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4fr9d" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.558609 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gbhn\" (UniqueName: \"kubernetes.io/projected/da3a0959-1a85-473a-95d5-51b77e30c5da-kube-api-access-9gbhn\") pod \"multus-admission-controller-69db94689b-xfrc5\" (UID: \"da3a0959-1a85-473a-95d5-51b77e30c5da\") " pod="openshift-multus/multus-admission-controller-69db94689b-xfrc5" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.558633 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2dde6863-5960-4b1b-b694-be1862901fb0-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-q9l8j\" (UID: \"2dde6863-5960-4b1b-b694-be1862901fb0\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-q9l8j" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.558665 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/282f137b-885c-4e38-ac24-c35a21457457-auth-proxy-config\") pod \"machine-approver-54c688565-d55cs\" (UID: \"282f137b-885c-4e38-ac24-c35a21457457\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-d55cs" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.558691 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/287553f1-f80f-47bb-8a01-1930cd0e5d2c-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-c6bc8\" (UID: \"287553f1-f80f-47bb-8a01-1930cd0e5d2c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-c6bc8" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.558711 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/9f61a6cf-7081-41ed-9e89-05212a634fb0-client-ca\") pod \"route-controller-manager-776cdc94d6-5p85q\" (UID: \"9f61a6cf-7081-41ed-9e89-05212a634fb0\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-5p85q" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.558729 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/419c7428-8eea-4a26-8329-f359a77e5c80-serving-cert\") pod \"etcd-operator-69b85846b6-vr7cg\" (UID: \"419c7428-8eea-4a26-8329-f359a77e5c80\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-vr7cg" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.558787 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtklx\" (UniqueName: \"kubernetes.io/projected/8f5bf46f-e39c-4fa5-9ec3-24912f616295-kube-api-access-vtklx\") pod \"package-server-manager-77f986bd66-p9ggs\" (UID: \"8f5bf46f-e39c-4fa5-9ec3-24912f616295\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-p9ggs" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.558812 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-6qnjf\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.558836 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-6qnjf\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.558860 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/419c7428-8eea-4a26-8329-f359a77e5c80-config\") pod \"etcd-operator-69b85846b6-vr7cg\" (UID: \"419c7428-8eea-4a26-8329-f359a77e5c80\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-vr7cg" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.558888 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0494dafa-d272-45bf-a11e-7ca78f92223d-config\") pod \"apiserver-9ddfb9f55-84k5t\" (UID: \"0494dafa-d272-45bf-a11e-7ca78f92223d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-84k5t" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.558918 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f4p94\" (UniqueName: \"kubernetes.io/projected/b99b91a3-bde7-4051-b805-2b015cbd3ab6-kube-api-access-f4p94\") pod \"openshift-apiserver-operator-846cbfc458-nb9km\" (UID: \"b99b91a3-bde7-4051-b805-2b015cbd3ab6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-nb9km" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.558937 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm4mk\" (UniqueName: 
\"kubernetes.io/projected/39b31197-feb5-4a81-8dca-de4b873dc013-kube-api-access-xm4mk\") pod \"oauth-openshift-66458b6674-6qnjf\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.558954 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/039025e0-e2cb-479d-b87a-9966fa3d96f2-tmpfs\") pod \"olm-operator-5cdf44d969-4j82v\" (UID: \"039025e0-e2cb-479d-b87a-9966fa3d96f2\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-4j82v" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.558975 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-6qnjf\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.559001 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/0e5a1f9f-a6df-4d87-bc2d-509d2632fb32-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-477z9\" (UID: \"0e5a1f9f-a6df-4d87-bc2d-509d2632fb32\") " pod="openshift-machine-api/machine-api-operator-755bb95488-477z9" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.559020 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-6qnjf\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.559020 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3df86cb4-acbc-40de-9991-9ba4cc6d0397-config\") pod \"authentication-operator-7f5c659b84-kbdp8\" (UID: \"3df86cb4-acbc-40de-9991-9ba4cc6d0397\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-kbdp8" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.559042 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9f61a6cf-7081-41ed-9e89-05212a634fb0-tmp\") pod \"route-controller-manager-776cdc94d6-5p85q\" (UID: \"9f61a6cf-7081-41ed-9e89-05212a634fb0\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-5p85q" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.559148 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bd987c53-2ea5-4da5-a0ea-bb16dc45cdb0-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-fwthw\" (UID: \"bd987c53-2ea5-4da5-a0ea-bb16dc45cdb0\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-fwthw" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.559196 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc955325-c8fa-4454-ab18-2d7ea44f7da4-serving-cert\") pod 
\"openshift-controller-manager-operator-686468bdd5-vs6xv\" (UID: \"cc955325-c8fa-4454-ab18-2d7ea44f7da4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-vs6xv" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.559222 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3df86cb4-acbc-40de-9991-9ba4cc6d0397-serving-cert\") pod \"authentication-operator-7f5c659b84-kbdp8\" (UID: \"3df86cb4-acbc-40de-9991-9ba4cc6d0397\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-kbdp8" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.559251 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9qjj\" (UniqueName: \"kubernetes.io/projected/178950b5-b1b9-4d7d-90b1-ba4fb79fd10d-kube-api-access-b9qjj\") pod \"control-plane-machine-set-operator-75ffdb6fcd-xhj5t\" (UID: \"178950b5-b1b9-4d7d-90b1-ba4fb79fd10d\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-xhj5t" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.559304 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bd987c53-2ea5-4da5-a0ea-bb16dc45cdb0-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-fwthw\" (UID: \"bd987c53-2ea5-4da5-a0ea-bb16dc45cdb0\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-fwthw" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.559346 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj6qk\" (UniqueName: \"kubernetes.io/projected/2dde6863-5960-4b1b-b694-be1862901fb0-kube-api-access-kj6qk\") pod \"machine-config-controller-f9cdd68f7-q9l8j\" (UID: \"2dde6863-5960-4b1b-b694-be1862901fb0\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-q9l8j" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.559435 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85ddc24f-5591-4300-9269-cbc659dc7b4f-serving-cert\") pod \"controller-manager-65b6cccf98-5pwm7\" (UID: \"85ddc24f-5591-4300-9269-cbc659dc7b4f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-5pwm7" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.559488 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9f61a6cf-7081-41ed-9e89-05212a634fb0-tmp\") pod \"route-controller-manager-776cdc94d6-5p85q\" (UID: \"9f61a6cf-7081-41ed-9e89-05212a634fb0\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-5p85q" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.559498 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0ffb7e64-0677-44ec-971d-fda3f9b87e2d-etcd-serving-ca\") pod \"apiserver-8596bd845d-btpkr\" (UID: \"0ffb7e64-0677-44ec-971d-fda3f9b87e2d\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-btpkr" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.559526 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee64e319-d2fd-4a23-808e-a4ab684a16af-serving-cert\") pod 
\"openshift-config-operator-5777786469-lhgtf\" (UID: \"ee64e319-d2fd-4a23-808e-a4ab684a16af\") " pod="openshift-config-operator/openshift-config-operator-5777786469-lhgtf" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.559560 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ffb7e64-0677-44ec-971d-fda3f9b87e2d-serving-cert\") pod \"apiserver-8596bd845d-btpkr\" (UID: \"0ffb7e64-0677-44ec-971d-fda3f9b87e2d\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-btpkr" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.559705 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0ffb7e64-0677-44ec-971d-fda3f9b87e2d-audit-dir\") pod \"apiserver-8596bd845d-btpkr\" (UID: \"0ffb7e64-0677-44ec-971d-fda3f9b87e2d\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-btpkr" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.559780 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3df86cb4-acbc-40de-9991-9ba4cc6d0397-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-kbdp8\" (UID: \"3df86cb4-acbc-40de-9991-9ba4cc6d0397\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-kbdp8" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.559812 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cx6x4\" (UniqueName: \"kubernetes.io/projected/282f137b-885c-4e38-ac24-c35a21457457-kube-api-access-cx6x4\") pod \"machine-approver-54c688565-d55cs\" (UID: \"282f137b-885c-4e38-ac24-c35a21457457\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-d55cs" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.559837 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0ffb7e64-0677-44ec-971d-fda3f9b87e2d-encryption-config\") pod \"apiserver-8596bd845d-btpkr\" (UID: \"0ffb7e64-0677-44ec-971d-fda3f9b87e2d\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-btpkr" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.559863 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc660b0c-3432-4bfa-8349-0f7ac08afce8-config\") pod \"openshift-kube-scheduler-operator-54f497555d-62qpd\" (UID: \"cc660b0c-3432-4bfa-8349-0f7ac08afce8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-62qpd" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.559970 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/67a0e83c-f043-4329-95ac-4cc0a6ac538f-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-lxg2b\" (UID: \"67a0e83c-f043-4329-95ac-4cc0a6ac538f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-lxg2b" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.559998 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/63bfe3eb-44bd-45db-8327-52468bb9ca12-signing-cabundle\") pod \"service-ca-74545575db-ddw2j\" (UID: \"63bfe3eb-44bd-45db-8327-52468bb9ca12\") " 
pod="openshift-service-ca/service-ca-74545575db-ddw2j" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.560029 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cc660b0c-3432-4bfa-8349-0f7ac08afce8-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-62qpd\" (UID: \"cc660b0c-3432-4bfa-8349-0f7ac08afce8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-62qpd" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.560056 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/9ab60787-a0f6-4772-96ae-8278cdada627-tmpfs\") pod \"catalog-operator-75ff9f647d-nx9kc\" (UID: \"9ab60787-a0f6-4772-96ae-8278cdada627\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-nx9kc" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.560083 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/419c7428-8eea-4a26-8329-f359a77e5c80-etcd-ca\") pod \"etcd-operator-69b85846b6-vr7cg\" (UID: \"419c7428-8eea-4a26-8329-f359a77e5c80\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-vr7cg" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.560106 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pzq8\" (UniqueName: \"kubernetes.io/projected/419c7428-8eea-4a26-8329-f359a77e5c80-kube-api-access-9pzq8\") pod \"etcd-operator-69b85846b6-vr7cg\" (UID: \"419c7428-8eea-4a26-8329-f359a77e5c80\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-vr7cg" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.560119 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11b4e369-201a-410c-a66c-9612fc9fafa8-config\") pod \"kube-storage-version-migrator-operator-565b79b866-4fr9d\" (UID: \"11b4e369-201a-410c-a66c-9612fc9fafa8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4fr9d" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.560134 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0ffb7e64-0677-44ec-971d-fda3f9b87e2d-audit-policies\") pod \"apiserver-8596bd845d-btpkr\" (UID: \"0ffb7e64-0677-44ec-971d-fda3f9b87e2d\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-btpkr" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.560269 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/05e481c5-0ad1-4c76-bf43-a32b82b763c7-config-volume\") pod \"collect-profiles-29483655-48djt\" (UID: \"05e481c5-0ad1-4c76-bf43-a32b82b763c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483655-48djt" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.560431 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0e5a1f9f-a6df-4d87-bc2d-509d2632fb32-images\") pod \"machine-api-operator-755bb95488-477z9\" (UID: \"0e5a1f9f-a6df-4d87-bc2d-509d2632fb32\") " pod="openshift-machine-api/machine-api-operator-755bb95488-477z9" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.561127 5099 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0ffb7e64-0677-44ec-971d-fda3f9b87e2d-audit-dir\") pod \"apiserver-8596bd845d-btpkr\" (UID: \"0ffb7e64-0677-44ec-971d-fda3f9b87e2d\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-btpkr" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.561816 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3df86cb4-acbc-40de-9991-9ba4cc6d0397-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-kbdp8\" (UID: \"3df86cb4-acbc-40de-9991-9ba4cc6d0397\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-kbdp8" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.562375 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/282f137b-885c-4e38-ac24-c35a21457457-auth-proxy-config\") pod \"machine-approver-54c688565-d55cs\" (UID: \"282f137b-885c-4e38-ac24-c35a21457457\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-d55cs" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.563345 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f61a6cf-7081-41ed-9e89-05212a634fb0-serving-cert\") pod \"route-controller-manager-776cdc94d6-5p85q\" (UID: \"9f61a6cf-7081-41ed-9e89-05212a634fb0\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-5p85q" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.564031 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9f61a6cf-7081-41ed-9e89-05212a634fb0-client-ca\") pod \"route-controller-manager-776cdc94d6-5p85q\" (UID: \"9f61a6cf-7081-41ed-9e89-05212a634fb0\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-5p85q" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.565332 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0ffb7e64-0677-44ec-971d-fda3f9b87e2d-encryption-config\") pod \"apiserver-8596bd845d-btpkr\" (UID: \"0ffb7e64-0677-44ec-971d-fda3f9b87e2d\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-btpkr" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.565335 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11b4e369-201a-410c-a66c-9612fc9fafa8-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-4fr9d\" (UID: \"11b4e369-201a-410c-a66c-9612fc9fafa8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4fr9d" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.565452 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0ffb7e64-0677-44ec-971d-fda3f9b87e2d-etcd-client\") pod \"apiserver-8596bd845d-btpkr\" (UID: \"0ffb7e64-0677-44ec-971d-fda3f9b87e2d\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-btpkr" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.565863 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/9ab60787-a0f6-4772-96ae-8278cdada627-tmpfs\") pod \"catalog-operator-75ff9f647d-nx9kc\" (UID: 
\"9ab60787-a0f6-4772-96ae-8278cdada627\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-nx9kc" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.566249 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0ffb7e64-0677-44ec-971d-fda3f9b87e2d-audit-policies\") pod \"apiserver-8596bd845d-btpkr\" (UID: \"0ffb7e64-0677-44ec-971d-fda3f9b87e2d\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-btpkr" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.566624 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0494dafa-d272-45bf-a11e-7ca78f92223d-audit\") pod \"apiserver-9ddfb9f55-84k5t\" (UID: \"0494dafa-d272-45bf-a11e-7ca78f92223d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-84k5t" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.566728 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0494dafa-d272-45bf-a11e-7ca78f92223d-etcd-client\") pod \"apiserver-9ddfb9f55-84k5t\" (UID: \"0494dafa-d272-45bf-a11e-7ca78f92223d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-84k5t" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.567756 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0494dafa-d272-45bf-a11e-7ca78f92223d-config\") pod \"apiserver-9ddfb9f55-84k5t\" (UID: \"0494dafa-d272-45bf-a11e-7ca78f92223d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-84k5t" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.567924 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.568205 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6zt7l"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.568306 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b99b91a3-bde7-4051-b805-2b015cbd3ab6-config\") pod \"openshift-apiserver-operator-846cbfc458-nb9km\" (UID: \"b99b91a3-bde7-4051-b805-2b015cbd3ab6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-nb9km" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.568682 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0494dafa-d272-45bf-a11e-7ca78f92223d-audit-dir\") pod \"apiserver-9ddfb9f55-84k5t\" (UID: \"0494dafa-d272-45bf-a11e-7ca78f92223d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-84k5t" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.569310 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/da3a0959-1a85-473a-95d5-51b77e30c5da-webhook-certs\") pod \"multus-admission-controller-69db94689b-xfrc5\" (UID: \"da3a0959-1a85-473a-95d5-51b77e30c5da\") " pod="openshift-multus/multus-admission-controller-69db94689b-xfrc5" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.569356 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-6qnjf\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.569429 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0494dafa-d272-45bf-a11e-7ca78f92223d-encryption-config\") pod \"apiserver-9ddfb9f55-84k5t\" (UID: \"0494dafa-d272-45bf-a11e-7ca78f92223d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-84k5t" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.569932 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cc955325-c8fa-4454-ab18-2d7ea44f7da4-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-vs6xv\" (UID: \"cc955325-c8fa-4454-ab18-2d7ea44f7da4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-vs6xv" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.570038 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0494dafa-d272-45bf-a11e-7ca78f92223d-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-84k5t\" (UID: \"0494dafa-d272-45bf-a11e-7ca78f92223d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-84k5t" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.570182 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e5a1f9f-a6df-4d87-bc2d-509d2632fb32-config\") pod \"machine-api-operator-755bb95488-477z9\" (UID: \"0e5a1f9f-a6df-4d87-bc2d-509d2632fb32\") " pod="openshift-machine-api/machine-api-operator-755bb95488-477z9" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.570306 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b99b91a3-bde7-4051-b805-2b015cbd3ab6-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-nb9km\" (UID: \"b99b91a3-bde7-4051-b805-2b015cbd3ab6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-nb9km" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.570492 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2b370d45-15f6-4f78-90d8-f15bb7f31949-tmpfs\") pod \"packageserver-7d4fc7d867-xr84c\" (UID: \"2b370d45-15f6-4f78-90d8-f15bb7f31949\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-xr84c" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.570555 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0494dafa-d272-45bf-a11e-7ca78f92223d-serving-cert\") pod \"apiserver-9ddfb9f55-84k5t\" (UID: \"0494dafa-d272-45bf-a11e-7ca78f92223d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-84k5t" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.570593 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-j5kzm\" (UniqueName: \"kubernetes.io/projected/0ffb7e64-0677-44ec-971d-fda3f9b87e2d-kube-api-access-j5kzm\") pod \"apiserver-8596bd845d-btpkr\" (UID: \"0ffb7e64-0677-44ec-971d-fda3f9b87e2d\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-btpkr" Jan 21 18:15:58 
crc kubenswrapper[5099]: I0121 18:15:58.570613 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/039025e0-e2cb-479d-b87a-9966fa3d96f2-srv-cert\") pod \"olm-operator-5cdf44d969-4j82v\" (UID: \"039025e0-e2cb-479d-b87a-9966fa3d96f2\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-4j82v" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.570708 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-6qnjf\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.571196 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cwjcj\" (UniqueName: \"kubernetes.io/projected/0e5a1f9f-a6df-4d87-bc2d-509d2632fb32-kube-api-access-cwjcj\") pod \"machine-api-operator-755bb95488-477z9\" (UID: \"0e5a1f9f-a6df-4d87-bc2d-509d2632fb32\") " pod="openshift-machine-api/machine-api-operator-755bb95488-477z9" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.571249 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/178950b5-b1b9-4d7d-90b1-ba4fb79fd10d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-xhj5t\" (UID: \"178950b5-b1b9-4d7d-90b1-ba4fb79fd10d\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-xhj5t" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.571292 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/85ddc24f-5591-4300-9269-cbc659dc7b4f-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-5pwm7\" (UID: \"85ddc24f-5591-4300-9269-cbc659dc7b4f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-5pwm7" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.571416 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/ee64e319-d2fd-4a23-808e-a4ab684a16af-available-featuregates\") pod \"openshift-config-operator-5777786469-lhgtf\" (UID: \"ee64e319-d2fd-4a23-808e-a4ab684a16af\") " pod="openshift-config-operator/openshift-config-operator-5777786469-lhgtf" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.571474 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cdgtx\" (UniqueName: \"kubernetes.io/projected/ee64e319-d2fd-4a23-808e-a4ab684a16af-kube-api-access-cdgtx\") pod \"openshift-config-operator-5777786469-lhgtf\" (UID: \"ee64e319-d2fd-4a23-808e-a4ab684a16af\") " pod="openshift-config-operator/openshift-config-operator-5777786469-lhgtf" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.571665 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0494dafa-d272-45bf-a11e-7ca78f92223d-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-84k5t\" (UID: \"0494dafa-d272-45bf-a11e-7ca78f92223d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-84k5t" Jan 21 18:15:58 crc 
kubenswrapper[5099]: I0121 18:15:58.571724 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/419c7428-8eea-4a26-8329-f359a77e5c80-etcd-service-ca\") pod \"etcd-operator-69b85846b6-vr7cg\" (UID: \"419c7428-8eea-4a26-8329-f359a77e5c80\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-vr7cg" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.571974 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/ee64e319-d2fd-4a23-808e-a4ab684a16af-available-featuregates\") pod \"openshift-config-operator-5777786469-lhgtf\" (UID: \"ee64e319-d2fd-4a23-808e-a4ab684a16af\") " pod="openshift-config-operator/openshift-config-operator-5777786469-lhgtf" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.571944 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8f5bf46f-e39c-4fa5-9ec3-24912f616295-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-p9ggs\" (UID: \"8f5bf46f-e39c-4fa5-9ec3-24912f616295\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-p9ggs" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.572027 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85ddc24f-5591-4300-9269-cbc659dc7b4f-config\") pod \"controller-manager-65b6cccf98-5pwm7\" (UID: \"85ddc24f-5591-4300-9269-cbc659dc7b4f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-5pwm7" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.572176 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/85ddc24f-5591-4300-9269-cbc659dc7b4f-tmp\") pod \"controller-manager-65b6cccf98-5pwm7\" (UID: \"85ddc24f-5591-4300-9269-cbc659dc7b4f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-5pwm7" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.572227 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc955325-c8fa-4454-ab18-2d7ea44f7da4-config\") pod \"openshift-controller-manager-operator-686468bdd5-vs6xv\" (UID: \"cc955325-c8fa-4454-ab18-2d7ea44f7da4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-vs6xv" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.572323 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bd987c53-2ea5-4da5-a0ea-bb16dc45cdb0-images\") pod \"machine-config-operator-67c9d58cbb-fwthw\" (UID: \"bd987c53-2ea5-4da5-a0ea-bb16dc45cdb0\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-fwthw" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.572865 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc955325-c8fa-4454-ab18-2d7ea44f7da4-config\") pod \"openshift-controller-manager-operator-686468bdd5-vs6xv\" (UID: \"cc955325-c8fa-4454-ab18-2d7ea44f7da4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-vs6xv" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.572881 5099 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/85ddc24f-5591-4300-9269-cbc659dc7b4f-tmp\") pod \"controller-manager-65b6cccf98-5pwm7\" (UID: \"85ddc24f-5591-4300-9269-cbc659dc7b4f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-5pwm7" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.572328 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0494dafa-d272-45bf-a11e-7ca78f92223d-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-84k5t\" (UID: \"0494dafa-d272-45bf-a11e-7ca78f92223d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-84k5t" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.573190 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9ab60787-a0f6-4772-96ae-8278cdada627-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-nx9kc\" (UID: \"9ab60787-a0f6-4772-96ae-8278cdada627\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-nx9kc" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.573492 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/0e5a1f9f-a6df-4d87-bc2d-509d2632fb32-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-477z9\" (UID: \"0e5a1f9f-a6df-4d87-bc2d-509d2632fb32\") " pod="openshift-machine-api/machine-api-operator-755bb95488-477z9" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.573603 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/287553f1-f80f-47bb-8a01-1930cd0e5d2c-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-c6bc8\" (UID: \"287553f1-f80f-47bb-8a01-1930cd0e5d2c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-c6bc8" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.574204 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ffb7e64-0677-44ec-971d-fda3f9b87e2d-serving-cert\") pod \"apiserver-8596bd845d-btpkr\" (UID: \"0ffb7e64-0677-44ec-971d-fda3f9b87e2d\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-btpkr" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.574239 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0494dafa-d272-45bf-a11e-7ca78f92223d-encryption-config\") pod \"apiserver-9ddfb9f55-84k5t\" (UID: \"0494dafa-d272-45bf-a11e-7ca78f92223d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-84k5t" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.574282 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee64e319-d2fd-4a23-808e-a4ab684a16af-serving-cert\") pod \"openshift-config-operator-5777786469-lhgtf\" (UID: \"ee64e319-d2fd-4a23-808e-a4ab684a16af\") " pod="openshift-config-operator/openshift-config-operator-5777786469-lhgtf" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.575791 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85ddc24f-5591-4300-9269-cbc659dc7b4f-serving-cert\") pod \"controller-manager-65b6cccf98-5pwm7\" (UID: \"85ddc24f-5591-4300-9269-cbc659dc7b4f\") " 
pod="openshift-controller-manager/controller-manager-65b6cccf98-5pwm7" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.576351 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f61a6cf-7081-41ed-9e89-05212a634fb0-config\") pod \"route-controller-manager-776cdc94d6-5p85q\" (UID: \"9f61a6cf-7081-41ed-9e89-05212a634fb0\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-5p85q" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.576503 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3df86cb4-acbc-40de-9991-9ba4cc6d0397-serving-cert\") pod \"authentication-operator-7f5c659b84-kbdp8\" (UID: \"3df86cb4-acbc-40de-9991-9ba4cc6d0397\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-kbdp8" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.581825 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0494dafa-d272-45bf-a11e-7ca78f92223d-serving-cert\") pod \"apiserver-9ddfb9f55-84k5t\" (UID: \"0494dafa-d272-45bf-a11e-7ca78f92223d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-84k5t" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.582023 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/282f137b-885c-4e38-ac24-c35a21457457-machine-approver-tls\") pod \"machine-approver-54c688565-d55cs\" (UID: \"282f137b-885c-4e38-ac24-c35a21457457\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-d55cs" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.582581 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b99b91a3-bde7-4051-b805-2b015cbd3ab6-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-nb9km\" (UID: \"b99b91a3-bde7-4051-b805-2b015cbd3ab6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-nb9km" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.582762 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9ab60787-a0f6-4772-96ae-8278cdada627-srv-cert\") pod \"catalog-operator-75ff9f647d-nx9kc\" (UID: \"9ab60787-a0f6-4772-96ae-8278cdada627\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-nx9kc" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.582828 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc955325-c8fa-4454-ab18-2d7ea44f7da4-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-vs6xv\" (UID: \"cc955325-c8fa-4454-ab18-2d7ea44f7da4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-vs6xv" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.583029 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjbt9" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.584817 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.584931 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-5pwm7"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.584990 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-747b44746d-zlpql"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.586751 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6zt7l" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.595722 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-c6bc8"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.595959 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-zlpql" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.596119 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-btpkr"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.596213 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-5p85q"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.596273 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-vs6xv"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.596339 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-lkws9"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.604106 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.607050 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-dgxpb"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.607312 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-lkws9" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.612902 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-nx9kc"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.613132 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-kbdp8"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.613234 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-lhgtf"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.613309 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-hkwx9"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.613380 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-nb9km"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.613465 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-lxg2b"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.613539 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-62qpd"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.613624 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-84k5t"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.613709 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-fwthw"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.613827 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-6r5xz"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.613909 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-xhj5t"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.613985 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-6qnjf"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.614058 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-ddw2j"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.614129 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4fr9d"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.614199 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-r6vwz"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.614636 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-dgxpb" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.620082 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-zjwsj"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.620304 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-r6vwz" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.624115 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-vr7cg"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.624160 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-vgfbc"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.624174 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-p9ggs"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.624311 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-477z9"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.624342 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjbt9"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.624358 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-6ww7n"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.624372 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-48lzl"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.624385 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6zt7l"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.624397 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-r6vwz"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.624410 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-tjl2r"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.624421 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483655-48djt"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.624432 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-2zxgx"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.624441 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-q9l8j"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.624451 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-4j82v"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.624460 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-lkws9"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.624469 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-xr84c"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.624479 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-xfrc5"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.624487 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-zlpql"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.624392 5099 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-zjwsj" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.624495 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-z2ttk"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.624617 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-spkz4"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.625577 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.628093 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-spkz4"] Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.628248 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-spkz4" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.643915 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.671518 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.673440 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/419c7428-8eea-4a26-8329-f359a77e5c80-etcd-ca\") pod \"etcd-operator-69b85846b6-vr7cg\" (UID: \"419c7428-8eea-4a26-8329-f359a77e5c80\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-vr7cg" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.673572 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9pzq8\" (UniqueName: \"kubernetes.io/projected/419c7428-8eea-4a26-8329-f359a77e5c80-kube-api-access-9pzq8\") pod \"etcd-operator-69b85846b6-vr7cg\" (UID: \"419c7428-8eea-4a26-8329-f359a77e5c80\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-vr7cg" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.673611 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2925eaf0-c587-4c7f-a246-5b64c7103637-config\") pod \"kube-controller-manager-operator-69d5f845f8-rjbt9\" (UID: \"2925eaf0-c587-4c7f-a246-5b64c7103637\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjbt9" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.673635 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/05e481c5-0ad1-4c76-bf43-a32b82b763c7-config-volume\") pod \"collect-profiles-29483655-48djt\" (UID: \"05e481c5-0ad1-4c76-bf43-a32b82b763c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483655-48djt" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.673667 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/da3a0959-1a85-473a-95d5-51b77e30c5da-webhook-certs\") pod \"multus-admission-controller-69db94689b-xfrc5\" (UID: \"da3a0959-1a85-473a-95d5-51b77e30c5da\") " pod="openshift-multus/multus-admission-controller-69db94689b-xfrc5" Jan 21 
18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.673691 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-6qnjf\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.673888 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2b370d45-15f6-4f78-90d8-f15bb7f31949-tmpfs\") pod \"packageserver-7d4fc7d867-xr84c\" (UID: \"2b370d45-15f6-4f78-90d8-f15bb7f31949\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-xr84c" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.674034 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/039025e0-e2cb-479d-b87a-9966fa3d96f2-srv-cert\") pod \"olm-operator-5cdf44d969-4j82v\" (UID: \"039025e0-e2cb-479d-b87a-9966fa3d96f2\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-4j82v" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.674137 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-6qnjf\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.674208 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/178950b5-b1b9-4d7d-90b1-ba4fb79fd10d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-xhj5t\" (UID: \"178950b5-b1b9-4d7d-90b1-ba4fb79fd10d\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-xhj5t" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.674259 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2925eaf0-c587-4c7f-a246-5b64c7103637-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-rjbt9\" (UID: \"2925eaf0-c587-4c7f-a246-5b64c7103637\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjbt9" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.674294 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcjlf\" (UniqueName: \"kubernetes.io/projected/ad44bdbe-5009-4b21-ad83-21185ec2d86d-kube-api-access-qcjlf\") pod \"router-default-68cf44c8b8-lqqhp\" (UID: \"ad44bdbe-5009-4b21-ad83-21185ec2d86d\") " pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.674504 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/419c7428-8eea-4a26-8329-f359a77e5c80-etcd-service-ca\") pod \"etcd-operator-69b85846b6-vr7cg\" (UID: \"419c7428-8eea-4a26-8329-f359a77e5c80\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-vr7cg" Jan 21 18:15:58 crc 
kubenswrapper[5099]: I0121 18:15:58.674553 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8f5bf46f-e39c-4fa5-9ec3-24912f616295-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-p9ggs\" (UID: \"8f5bf46f-e39c-4fa5-9ec3-24912f616295\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-p9ggs" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.674773 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bd987c53-2ea5-4da5-a0ea-bb16dc45cdb0-images\") pod \"machine-config-operator-67c9d58cbb-fwthw\" (UID: \"bd987c53-2ea5-4da5-a0ea-bb16dc45cdb0\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-fwthw" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.674813 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2b370d45-15f6-4f78-90d8-f15bb7f31949-apiservice-cert\") pod \"packageserver-7d4fc7d867-xr84c\" (UID: \"2b370d45-15f6-4f78-90d8-f15bb7f31949\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-xr84c" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.674864 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5zlnm\" (UniqueName: \"kubernetes.io/projected/05e481c5-0ad1-4c76-bf43-a32b82b763c7-kube-api-access-5zlnm\") pod \"collect-profiles-29483655-48djt\" (UID: \"05e481c5-0ad1-4c76-bf43-a32b82b763c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483655-48djt" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.674896 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-6qnjf\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.674954 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/419c7428-8eea-4a26-8329-f359a77e5c80-etcd-client\") pod \"etcd-operator-69b85846b6-vr7cg\" (UID: \"419c7428-8eea-4a26-8329-f359a77e5c80\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-vr7cg" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.674982 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h7vsh\" (UniqueName: \"kubernetes.io/projected/ac0d4bff-1835-45f9-bca5-e84de2f1c705-kube-api-access-h7vsh\") pod \"migrator-866fcbc849-vgfbc\" (UID: \"ac0d4bff-1835-45f9-bca5-e84de2f1c705\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-vgfbc" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.675034 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2b370d45-15f6-4f78-90d8-f15bb7f31949-tmpfs\") pod \"packageserver-7d4fc7d867-xr84c\" (UID: \"2b370d45-15f6-4f78-90d8-f15bb7f31949\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-xr84c" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.675056 5099 reconciler_common.go:224] "operationExecutor.MountVolume started 
for volume \"kube-api-access-l4r4x\" (UniqueName: \"kubernetes.io/projected/63bfe3eb-44bd-45db-8327-52468bb9ca12-kube-api-access-l4r4x\") pod \"service-ca-74545575db-ddw2j\" (UID: \"63bfe3eb-44bd-45db-8327-52468bb9ca12\") " pod="openshift-service-ca/service-ca-74545575db-ddw2j" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.675117 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8g6f5\" (UniqueName: \"kubernetes.io/projected/bd987c53-2ea5-4da5-a0ea-bb16dc45cdb0-kube-api-access-8g6f5\") pod \"machine-config-operator-67c9d58cbb-fwthw\" (UID: \"bd987c53-2ea5-4da5-a0ea-bb16dc45cdb0\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-fwthw" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.675184 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-684w2\" (UniqueName: \"kubernetes.io/projected/c13f0ecd-bdc7-4f94-9013-3277f1b20451-kube-api-access-684w2\") pod \"service-ca-operator-5b9c976747-hkwx9\" (UID: \"c13f0ecd-bdc7-4f94-9013-3277f1b20451\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-hkwx9" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.675221 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cc660b0c-3432-4bfa-8349-0f7ac08afce8-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-62qpd\" (UID: \"cc660b0c-3432-4bfa-8349-0f7ac08afce8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-62qpd" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.675278 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-6qnjf\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.675311 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-6qnjf\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.675366 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2b370d45-15f6-4f78-90d8-f15bb7f31949-webhook-cert\") pod \"packageserver-7d4fc7d867-xr84c\" (UID: \"2b370d45-15f6-4f78-90d8-f15bb7f31949\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-xr84c" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.675429 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/039025e0-e2cb-479d-b87a-9966fa3d96f2-profile-collector-cert\") pod \"olm-operator-5cdf44d969-4j82v\" (UID: \"039025e0-e2cb-479d-b87a-9966fa3d96f2\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-4j82v" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.675460 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/67a0e83c-f043-4329-95ac-4cc0a6ac538f-tmp\") pod \"marketplace-operator-547dbd544d-lxg2b\" (UID: \"67a0e83c-f043-4329-95ac-4cc0a6ac538f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-lxg2b" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.675477 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bd987c53-2ea5-4da5-a0ea-bb16dc45cdb0-images\") pod \"machine-config-operator-67c9d58cbb-fwthw\" (UID: \"bd987c53-2ea5-4da5-a0ea-bb16dc45cdb0\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-fwthw" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.675525 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/39b31197-feb5-4a81-8dca-de4b873dc013-audit-dir\") pod \"oauth-openshift-66458b6674-6qnjf\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.675556 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6wlkq\" (UniqueName: \"kubernetes.io/projected/2b370d45-15f6-4f78-90d8-f15bb7f31949-kube-api-access-6wlkq\") pod \"packageserver-7d4fc7d867-xr84c\" (UID: \"2b370d45-15f6-4f78-90d8-f15bb7f31949\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-xr84c" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.675615 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/67a0e83c-f043-4329-95ac-4cc0a6ac538f-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-lxg2b\" (UID: \"67a0e83c-f043-4329-95ac-4cc0a6ac538f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-lxg2b" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.675645 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c13f0ecd-bdc7-4f94-9013-3277f1b20451-serving-cert\") pod \"service-ca-operator-5b9c976747-hkwx9\" (UID: \"c13f0ecd-bdc7-4f94-9013-3277f1b20451\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-hkwx9" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.675704 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/09b91830-9a07-4d48-9435-c5f7e9c2a402-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-6ww7n\" (UID: \"09b91830-9a07-4d48-9435-c5f7e9c2a402\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-6ww7n" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.675956 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-6qnjf\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.676007 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/09b91830-9a07-4d48-9435-c5f7e9c2a402-tmp\") pod 
\"cluster-image-registry-operator-86c45576b9-6ww7n\" (UID: \"09b91830-9a07-4d48-9435-c5f7e9c2a402\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-6ww7n" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.676040 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-588sd\" (UniqueName: \"kubernetes.io/projected/67a0e83c-f043-4329-95ac-4cc0a6ac538f-kube-api-access-588sd\") pod \"marketplace-operator-547dbd544d-lxg2b\" (UID: \"67a0e83c-f043-4329-95ac-4cc0a6ac538f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-lxg2b" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.676061 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc660b0c-3432-4bfa-8349-0f7ac08afce8-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-62qpd\" (UID: \"cc660b0c-3432-4bfa-8349-0f7ac08afce8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-62qpd" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.676094 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/2925eaf0-c587-4c7f-a246-5b64c7103637-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-rjbt9\" (UID: \"2925eaf0-c587-4c7f-a246-5b64c7103637\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjbt9" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.676137 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fpwwd\" (UniqueName: \"kubernetes.io/projected/039025e0-e2cb-479d-b87a-9966fa3d96f2-kube-api-access-fpwwd\") pod \"olm-operator-5cdf44d969-4j82v\" (UID: \"039025e0-e2cb-479d-b87a-9966fa3d96f2\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-4j82v" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.676170 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad44bdbe-5009-4b21-ad83-21185ec2d86d-metrics-certs\") pod \"router-default-68cf44c8b8-lqqhp\" (UID: \"ad44bdbe-5009-4b21-ad83-21185ec2d86d\") " pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.676218 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2dde6863-5960-4b1b-b694-be1862901fb0-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-q9l8j\" (UID: \"2dde6863-5960-4b1b-b694-be1862901fb0\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-q9l8j" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.676250 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/05e481c5-0ad1-4c76-bf43-a32b82b763c7-secret-volume\") pod \"collect-profiles-29483655-48djt\" (UID: \"05e481c5-0ad1-4c76-bf43-a32b82b763c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483655-48djt" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.676276 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c13f0ecd-bdc7-4f94-9013-3277f1b20451-config\") pod \"service-ca-operator-5b9c976747-hkwx9\" (UID: 
\"c13f0ecd-bdc7-4f94-9013-3277f1b20451\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-hkwx9" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.676299 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/39b31197-feb5-4a81-8dca-de4b873dc013-audit-policies\") pod \"oauth-openshift-66458b6674-6qnjf\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.676325 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb2g4\" (UniqueName: \"kubernetes.io/projected/09b91830-9a07-4d48-9435-c5f7e9c2a402-kube-api-access-lb2g4\") pod \"cluster-image-registry-operator-86c45576b9-6ww7n\" (UID: \"09b91830-9a07-4d48-9435-c5f7e9c2a402\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-6ww7n" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.676353 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/63bfe3eb-44bd-45db-8327-52468bb9ca12-signing-key\") pod \"service-ca-74545575db-ddw2j\" (UID: \"63bfe3eb-44bd-45db-8327-52468bb9ca12\") " pod="openshift-service-ca/service-ca-74545575db-ddw2j" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.676379 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ad44bdbe-5009-4b21-ad83-21185ec2d86d-default-certificate\") pod \"router-default-68cf44c8b8-lqqhp\" (UID: \"ad44bdbe-5009-4b21-ad83-21185ec2d86d\") " pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.684909 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/419c7428-8eea-4a26-8329-f359a77e5c80-tmp-dir\") pod \"etcd-operator-69b85846b6-vr7cg\" (UID: \"419c7428-8eea-4a26-8329-f359a77e5c80\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-vr7cg" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.676830 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/39b31197-feb5-4a81-8dca-de4b873dc013-audit-dir\") pod \"oauth-openshift-66458b6674-6qnjf\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.677169 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/da3a0959-1a85-473a-95d5-51b77e30c5da-webhook-certs\") pod \"multus-admission-controller-69db94689b-xfrc5\" (UID: \"da3a0959-1a85-473a-95d5-51b77e30c5da\") " pod="openshift-multus/multus-admission-controller-69db94689b-xfrc5" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.677425 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/39b31197-feb5-4a81-8dca-de4b873dc013-audit-policies\") pod \"oauth-openshift-66458b6674-6qnjf\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.677542 5099 operation_generator.go:615] "MountVolume.SetUp succeeded 
for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2dde6863-5960-4b1b-b694-be1862901fb0-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-q9l8j\" (UID: \"2dde6863-5960-4b1b-b694-be1862901fb0\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-q9l8j" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.677689 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cc660b0c-3432-4bfa-8349-0f7ac08afce8-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-62qpd\" (UID: \"cc660b0c-3432-4bfa-8349-0f7ac08afce8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-62qpd" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.678465 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-6qnjf\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.679314 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/039025e0-e2cb-479d-b87a-9966fa3d96f2-srv-cert\") pod \"olm-operator-5cdf44d969-4j82v\" (UID: \"039025e0-e2cb-479d-b87a-9966fa3d96f2\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-4j82v" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.679807 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/039025e0-e2cb-479d-b87a-9966fa3d96f2-profile-collector-cert\") pod \"olm-operator-5cdf44d969-4j82v\" (UID: \"039025e0-e2cb-479d-b87a-9966fa3d96f2\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-4j82v" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.680662 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-6qnjf\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.681162 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/05e481c5-0ad1-4c76-bf43-a32b82b763c7-secret-volume\") pod \"collect-profiles-29483655-48djt\" (UID: \"05e481c5-0ad1-4c76-bf43-a32b82b763c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483655-48djt" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.682932 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-6qnjf\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.683365 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/419c7428-8eea-4a26-8329-f359a77e5c80-etcd-client\") 
pod \"etcd-operator-69b85846b6-vr7cg\" (UID: \"419c7428-8eea-4a26-8329-f359a77e5c80\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-vr7cg" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.683533 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/178950b5-b1b9-4d7d-90b1-ba4fb79fd10d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-xhj5t\" (UID: \"178950b5-b1b9-4d7d-90b1-ba4fb79fd10d\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-xhj5t" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.684544 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-6qnjf\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.684915 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.676685 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/67a0e83c-f043-4329-95ac-4cc0a6ac538f-tmp\") pod \"marketplace-operator-547dbd544d-lxg2b\" (UID: \"67a0e83c-f043-4329-95ac-4cc0a6ac538f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-lxg2b" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.685217 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-6qnjf\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.686022 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-6qnjf\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.686301 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/09b91830-9a07-4d48-9435-c5f7e9c2a402-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-6ww7n\" (UID: \"09b91830-9a07-4d48-9435-c5f7e9c2a402\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-6ww7n" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.686352 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9gbhn\" (UniqueName: \"kubernetes.io/projected/da3a0959-1a85-473a-95d5-51b77e30c5da-kube-api-access-9gbhn\") pod \"multus-admission-controller-69db94689b-xfrc5\" (UID: \"da3a0959-1a85-473a-95d5-51b77e30c5da\") " pod="openshift-multus/multus-admission-controller-69db94689b-xfrc5" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.686381 5099 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2dde6863-5960-4b1b-b694-be1862901fb0-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-q9l8j\" (UID: \"2dde6863-5960-4b1b-b694-be1862901fb0\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-q9l8j" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.686412 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/09b91830-9a07-4d48-9435-c5f7e9c2a402-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-6ww7n\" (UID: \"09b91830-9a07-4d48-9435-c5f7e9c2a402\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-6ww7n" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.686446 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/419c7428-8eea-4a26-8329-f359a77e5c80-serving-cert\") pod \"etcd-operator-69b85846b6-vr7cg\" (UID: \"419c7428-8eea-4a26-8329-f359a77e5c80\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-vr7cg" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.686472 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ad44bdbe-5009-4b21-ad83-21185ec2d86d-stats-auth\") pod \"router-default-68cf44c8b8-lqqhp\" (UID: \"ad44bdbe-5009-4b21-ad83-21185ec2d86d\") " pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.686509 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vtklx\" (UniqueName: \"kubernetes.io/projected/8f5bf46f-e39c-4fa5-9ec3-24912f616295-kube-api-access-vtklx\") pod \"package-server-manager-77f986bd66-p9ggs\" (UID: \"8f5bf46f-e39c-4fa5-9ec3-24912f616295\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-p9ggs" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.686531 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-6qnjf\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.686549 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-6qnjf\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.686576 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/419c7428-8eea-4a26-8329-f359a77e5c80-config\") pod \"etcd-operator-69b85846b6-vr7cg\" (UID: \"419c7428-8eea-4a26-8329-f359a77e5c80\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-vr7cg" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.686599 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xm4mk\" (UniqueName: 
\"kubernetes.io/projected/39b31197-feb5-4a81-8dca-de4b873dc013-kube-api-access-xm4mk\") pod \"oauth-openshift-66458b6674-6qnjf\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.686657 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/039025e0-e2cb-479d-b87a-9966fa3d96f2-tmpfs\") pod \"olm-operator-5cdf44d969-4j82v\" (UID: \"039025e0-e2cb-479d-b87a-9966fa3d96f2\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-4j82v" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.686714 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-6qnjf\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.686769 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad44bdbe-5009-4b21-ad83-21185ec2d86d-service-ca-bundle\") pod \"router-default-68cf44c8b8-lqqhp\" (UID: \"ad44bdbe-5009-4b21-ad83-21185ec2d86d\") " pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.686812 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-6qnjf\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.686854 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bd987c53-2ea5-4da5-a0ea-bb16dc45cdb0-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-fwthw\" (UID: \"bd987c53-2ea5-4da5-a0ea-bb16dc45cdb0\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-fwthw" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.687319 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-6qnjf\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.687481 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2925eaf0-c587-4c7f-a246-5b64c7103637-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-rjbt9\" (UID: \"2925eaf0-c587-4c7f-a246-5b64c7103637\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjbt9" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.687642 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-6qnjf\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.687749 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/419c7428-8eea-4a26-8329-f359a77e5c80-config\") pod \"etcd-operator-69b85846b6-vr7cg\" (UID: \"419c7428-8eea-4a26-8329-f359a77e5c80\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-vr7cg" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.687838 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/09b91830-9a07-4d48-9435-c5f7e9c2a402-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-6ww7n\" (UID: \"09b91830-9a07-4d48-9435-c5f7e9c2a402\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-6ww7n" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.687899 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b9qjj\" (UniqueName: \"kubernetes.io/projected/178950b5-b1b9-4d7d-90b1-ba4fb79fd10d-kube-api-access-b9qjj\") pod \"control-plane-machine-set-operator-75ffdb6fcd-xhj5t\" (UID: \"178950b5-b1b9-4d7d-90b1-ba4fb79fd10d\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-xhj5t" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.688583 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-6qnjf\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.688661 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/039025e0-e2cb-479d-b87a-9966fa3d96f2-tmpfs\") pod \"olm-operator-5cdf44d969-4j82v\" (UID: \"039025e0-e2cb-479d-b87a-9966fa3d96f2\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-4j82v" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.688833 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/419c7428-8eea-4a26-8329-f359a77e5c80-tmp-dir\") pod \"etcd-operator-69b85846b6-vr7cg\" (UID: \"419c7428-8eea-4a26-8329-f359a77e5c80\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-vr7cg" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.689021 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bd987c53-2ea5-4da5-a0ea-bb16dc45cdb0-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-fwthw\" (UID: \"bd987c53-2ea5-4da5-a0ea-bb16dc45cdb0\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-fwthw" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.689090 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kj6qk\" (UniqueName: \"kubernetes.io/projected/2dde6863-5960-4b1b-b694-be1862901fb0-kube-api-access-kj6qk\") pod \"machine-config-controller-f9cdd68f7-q9l8j\" (UID: 
\"2dde6863-5960-4b1b-b694-be1862901fb0\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-q9l8j" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.689167 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc660b0c-3432-4bfa-8349-0f7ac08afce8-config\") pod \"openshift-kube-scheduler-operator-54f497555d-62qpd\" (UID: \"cc660b0c-3432-4bfa-8349-0f7ac08afce8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-62qpd" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.689193 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/67a0e83c-f043-4329-95ac-4cc0a6ac538f-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-lxg2b\" (UID: \"67a0e83c-f043-4329-95ac-4cc0a6ac538f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-lxg2b" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.689216 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/63bfe3eb-44bd-45db-8327-52468bb9ca12-signing-cabundle\") pod \"service-ca-74545575db-ddw2j\" (UID: \"63bfe3eb-44bd-45db-8327-52468bb9ca12\") " pod="openshift-service-ca/service-ca-74545575db-ddw2j" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.689238 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cc660b0c-3432-4bfa-8349-0f7ac08afce8-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-62qpd\" (UID: \"cc660b0c-3432-4bfa-8349-0f7ac08afce8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-62qpd" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.689441 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bd987c53-2ea5-4da5-a0ea-bb16dc45cdb0-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-fwthw\" (UID: \"bd987c53-2ea5-4da5-a0ea-bb16dc45cdb0\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-fwthw" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.690971 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-6qnjf\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.691415 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-6qnjf\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.692438 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-6qnjf\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " 
pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.693261 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bd987c53-2ea5-4da5-a0ea-bb16dc45cdb0-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-fwthw\" (UID: \"bd987c53-2ea5-4da5-a0ea-bb16dc45cdb0\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-fwthw" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.703805 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.723907 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.735271 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/419c7428-8eea-4a26-8329-f359a77e5c80-etcd-ca\") pod \"etcd-operator-69b85846b6-vr7cg\" (UID: \"419c7428-8eea-4a26-8329-f359a77e5c80\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-vr7cg" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.744727 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.764651 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.770723 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/419c7428-8eea-4a26-8329-f359a77e5c80-serving-cert\") pod \"etcd-operator-69b85846b6-vr7cg\" (UID: \"419c7428-8eea-4a26-8329-f359a77e5c80\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-vr7cg" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.784168 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.786381 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/419c7428-8eea-4a26-8329-f359a77e5c80-etcd-service-ca\") pod \"etcd-operator-69b85846b6-vr7cg\" (UID: \"419c7428-8eea-4a26-8329-f359a77e5c80\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-vr7cg" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.791178 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2925eaf0-c587-4c7f-a246-5b64c7103637-config\") pod \"kube-controller-manager-operator-69d5f845f8-rjbt9\" (UID: \"2925eaf0-c587-4c7f-a246-5b64c7103637\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjbt9" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.791277 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2925eaf0-c587-4c7f-a246-5b64c7103637-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-rjbt9\" (UID: \"2925eaf0-c587-4c7f-a246-5b64c7103637\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjbt9" 
Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.791304 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qcjlf\" (UniqueName: \"kubernetes.io/projected/ad44bdbe-5009-4b21-ad83-21185ec2d86d-kube-api-access-qcjlf\") pod \"router-default-68cf44c8b8-lqqhp\" (UID: \"ad44bdbe-5009-4b21-ad83-21185ec2d86d\") " pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.791399 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/09b91830-9a07-4d48-9435-c5f7e9c2a402-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-6ww7n\" (UID: \"09b91830-9a07-4d48-9435-c5f7e9c2a402\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-6ww7n" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.791443 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/09b91830-9a07-4d48-9435-c5f7e9c2a402-tmp\") pod \"cluster-image-registry-operator-86c45576b9-6ww7n\" (UID: \"09b91830-9a07-4d48-9435-c5f7e9c2a402\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-6ww7n" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.791471 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/2925eaf0-c587-4c7f-a246-5b64c7103637-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-rjbt9\" (UID: \"2925eaf0-c587-4c7f-a246-5b64c7103637\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjbt9" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.791498 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad44bdbe-5009-4b21-ad83-21185ec2d86d-metrics-certs\") pod \"router-default-68cf44c8b8-lqqhp\" (UID: \"ad44bdbe-5009-4b21-ad83-21185ec2d86d\") " pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.791537 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lb2g4\" (UniqueName: \"kubernetes.io/projected/09b91830-9a07-4d48-9435-c5f7e9c2a402-kube-api-access-lb2g4\") pod \"cluster-image-registry-operator-86c45576b9-6ww7n\" (UID: \"09b91830-9a07-4d48-9435-c5f7e9c2a402\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-6ww7n" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.791659 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ad44bdbe-5009-4b21-ad83-21185ec2d86d-default-certificate\") pod \"router-default-68cf44c8b8-lqqhp\" (UID: \"ad44bdbe-5009-4b21-ad83-21185ec2d86d\") " pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.791726 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/09b91830-9a07-4d48-9435-c5f7e9c2a402-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-6ww7n\" (UID: \"09b91830-9a07-4d48-9435-c5f7e9c2a402\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-6ww7n" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.791793 5099 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/09b91830-9a07-4d48-9435-c5f7e9c2a402-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-6ww7n\" (UID: \"09b91830-9a07-4d48-9435-c5f7e9c2a402\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-6ww7n" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.791826 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ad44bdbe-5009-4b21-ad83-21185ec2d86d-stats-auth\") pod \"router-default-68cf44c8b8-lqqhp\" (UID: \"ad44bdbe-5009-4b21-ad83-21185ec2d86d\") " pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.791869 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad44bdbe-5009-4b21-ad83-21185ec2d86d-service-ca-bundle\") pod \"router-default-68cf44c8b8-lqqhp\" (UID: \"ad44bdbe-5009-4b21-ad83-21185ec2d86d\") " pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.791903 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2925eaf0-c587-4c7f-a246-5b64c7103637-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-rjbt9\" (UID: \"2925eaf0-c587-4c7f-a246-5b64c7103637\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjbt9" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.791931 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/09b91830-9a07-4d48-9435-c5f7e9c2a402-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-6ww7n\" (UID: \"09b91830-9a07-4d48-9435-c5f7e9c2a402\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-6ww7n" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.792000 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/09b91830-9a07-4d48-9435-c5f7e9c2a402-tmp\") pod \"cluster-image-registry-operator-86c45576b9-6ww7n\" (UID: \"09b91830-9a07-4d48-9435-c5f7e9c2a402\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-6ww7n" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.792346 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/09b91830-9a07-4d48-9435-c5f7e9c2a402-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-6ww7n\" (UID: \"09b91830-9a07-4d48-9435-c5f7e9c2a402\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-6ww7n" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.792395 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/2925eaf0-c587-4c7f-a246-5b64c7103637-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-rjbt9\" (UID: \"2925eaf0-c587-4c7f-a246-5b64c7103637\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjbt9" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.804378 5099 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.808574 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8f5bf46f-e39c-4fa5-9ec3-24912f616295-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-p9ggs\" (UID: \"8f5bf46f-e39c-4fa5-9ec3-24912f616295\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-p9ggs" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.824521 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.828840 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2b370d45-15f6-4f78-90d8-f15bb7f31949-apiservice-cert\") pod \"packageserver-7d4fc7d867-xr84c\" (UID: \"2b370d45-15f6-4f78-90d8-f15bb7f31949\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-xr84c" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.830632 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2b370d45-15f6-4f78-90d8-f15bb7f31949-webhook-cert\") pod \"packageserver-7d4fc7d867-xr84c\" (UID: \"2b370d45-15f6-4f78-90d8-f15bb7f31949\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-xr84c" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.843443 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.863782 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.883752 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.892097 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c13f0ecd-bdc7-4f94-9013-3277f1b20451-serving-cert\") pod \"service-ca-operator-5b9c976747-hkwx9\" (UID: \"c13f0ecd-bdc7-4f94-9013-3277f1b20451\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-hkwx9" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.903862 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.908781 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c13f0ecd-bdc7-4f94-9013-3277f1b20451-config\") pod \"service-ca-operator-5b9c976747-hkwx9\" (UID: \"c13f0ecd-bdc7-4f94-9013-3277f1b20451\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-hkwx9" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.913332 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.913955 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.914371 5099 scope.go:117] "RemoveContainer" containerID="1dec0ab2e83799703e0d0ba5f91da97aaa7f611990710cb8ee7aa49e64a46994" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.924643 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.943846 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.964162 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.976124 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc660b0c-3432-4bfa-8349-0f7ac08afce8-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-62qpd\" (UID: \"cc660b0c-3432-4bfa-8349-0f7ac08afce8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-62qpd" Jan 21 18:15:58 crc kubenswrapper[5099]: I0121 18:15:58.982982 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.003967 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.011585 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc660b0c-3432-4bfa-8349-0f7ac08afce8-config\") pod \"openshift-kube-scheduler-operator-54f497555d-62qpd\" (UID: \"cc660b0c-3432-4bfa-8349-0f7ac08afce8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-62qpd" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.024255 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.051205 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.052816 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/67a0e83c-f043-4329-95ac-4cc0a6ac538f-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-lxg2b\" (UID: \"67a0e83c-f043-4329-95ac-4cc0a6ac538f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-lxg2b" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.064363 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.071384 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/67a0e83c-f043-4329-95ac-4cc0a6ac538f-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-lxg2b\" (UID: \"67a0e83c-f043-4329-95ac-4cc0a6ac538f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-lxg2b" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.083554 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.103499 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.124473 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.131164 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/63bfe3eb-44bd-45db-8327-52468bb9ca12-signing-cabundle\") pod \"service-ca-74545575db-ddw2j\" (UID: \"63bfe3eb-44bd-45db-8327-52468bb9ca12\") " pod="openshift-service-ca/service-ca-74545575db-ddw2j" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.146242 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.164860 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.183565 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.203536 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.211683 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/63bfe3eb-44bd-45db-8327-52468bb9ca12-signing-key\") pod \"service-ca-74545575db-ddw2j\" (UID: \"63bfe3eb-44bd-45db-8327-52468bb9ca12\") " pod="openshift-service-ca/service-ca-74545575db-ddw2j" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.223716 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.244891 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.251448 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2dde6863-5960-4b1b-b694-be1862901fb0-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-q9l8j\" (UID: \"2dde6863-5960-4b1b-b694-be1862901fb0\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-q9l8j" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.263729 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.289484 5099 reflector.go:430] "Caches populated" 
type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.295352 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/05e481c5-0ad1-4c76-bf43-a32b82b763c7-config-volume\") pod \"collect-profiles-29483655-48djt\" (UID: \"05e481c5-0ad1-4c76-bf43-a32b82b763c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483655-48djt" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.303256 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.323165 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.343672 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.364229 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.383932 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.404766 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.423133 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.444058 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.474934 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.510492 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.513003 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/09b91830-9a07-4d48-9435-c5f7e9c2a402-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-6ww7n\" (UID: \"09b91830-9a07-4d48-9435-c5f7e9c2a402\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-6ww7n" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.521484 5099 request.go:752] "Waited before sending request" delay="1.015067237s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dimage-registry-operator-tls&limit=500&resourceVersion=0" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.522822 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.536559 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/09b91830-9a07-4d48-9435-c5f7e9c2a402-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-6ww7n\" (UID: \"09b91830-9a07-4d48-9435-c5f7e9c2a402\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-6ww7n" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.550175 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.563177 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.584559 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.604332 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.623456 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.644883 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.663535 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.683186 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.703931 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.717206 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ad44bdbe-5009-4b21-ad83-21185ec2d86d-default-certificate\") pod \"router-default-68cf44c8b8-lqqhp\" (UID: \"ad44bdbe-5009-4b21-ad83-21185ec2d86d\") " pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.723299 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.743570 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.756915 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad44bdbe-5009-4b21-ad83-21185ec2d86d-metrics-certs\") pod \"router-default-68cf44c8b8-lqqhp\" (UID: \"ad44bdbe-5009-4b21-ad83-21185ec2d86d\") " pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.766471 5099 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.776897 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ad44bdbe-5009-4b21-ad83-21185ec2d86d-stats-auth\") pod \"router-default-68cf44c8b8-lqqhp\" (UID: \"ad44bdbe-5009-4b21-ad83-21185ec2d86d\") " pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.783342 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Jan 21 18:15:59 crc kubenswrapper[5099]: E0121 18:15:59.792254 5099 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 21 18:15:59 crc kubenswrapper[5099]: E0121 18:15:59.792284 5099 configmap.go:193] Couldn't get configMap openshift-ingress/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 21 18:15:59 crc kubenswrapper[5099]: E0121 18:15:59.792259 5099 secret.go:189] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 21 18:15:59 crc kubenswrapper[5099]: E0121 18:15:59.792379 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2925eaf0-c587-4c7f-a246-5b64c7103637-config podName:2925eaf0-c587-4c7f-a246-5b64c7103637 nodeName:}" failed. No retries permitted until 2026-01-21 18:16:00.292352061 +0000 UTC m=+117.706314522 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/2925eaf0-c587-4c7f-a246-5b64c7103637-config") pod "kube-controller-manager-operator-69d5f845f8-rjbt9" (UID: "2925eaf0-c587-4c7f-a246-5b64c7103637") : failed to sync configmap cache: timed out waiting for the condition Jan 21 18:15:59 crc kubenswrapper[5099]: E0121 18:15:59.792398 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ad44bdbe-5009-4b21-ad83-21185ec2d86d-service-ca-bundle podName:ad44bdbe-5009-4b21-ad83-21185ec2d86d nodeName:}" failed. No retries permitted until 2026-01-21 18:16:00.292390892 +0000 UTC m=+117.706353353 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ad44bdbe-5009-4b21-ad83-21185ec2d86d-service-ca-bundle") pod "router-default-68cf44c8b8-lqqhp" (UID: "ad44bdbe-5009-4b21-ad83-21185ec2d86d") : failed to sync configmap cache: timed out waiting for the condition Jan 21 18:15:59 crc kubenswrapper[5099]: E0121 18:15:59.792434 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2925eaf0-c587-4c7f-a246-5b64c7103637-serving-cert podName:2925eaf0-c587-4c7f-a246-5b64c7103637 nodeName:}" failed. No retries permitted until 2026-01-21 18:16:00.292411112 +0000 UTC m=+117.706373593 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/2925eaf0-c587-4c7f-a246-5b64c7103637-serving-cert") pod "kube-controller-manager-operator-69d5f845f8-rjbt9" (UID: "2925eaf0-c587-4c7f-a246-5b64c7103637") : failed to sync secret cache: timed out waiting for the condition Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.803130 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.823073 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.843543 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.864884 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.889596 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.903824 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.915687 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tsdhb"
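
The nestedpendingoperations.go:348 errors above are the cache-sync timeouts surfacing through MountVolume.SetUp: each failed operation is parked and may not be retried before a deadline ("No retries permitted until ... durationBeforeRetry 500ms"), with the delay growing on repeated failures of the same volume. The retries resolve quickly here; the very next entry already reports the openshift-ingress service-ca-bundle cache populated. A sketch of that backoff shape using apimachinery's wait package; the constants are assumed for illustration, since kubelet keeps its own bookkeeping inside nestedpendingoperations:

package mountretry

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// SetUpWithBackoff retries a volume setup function with the 500ms-base
// exponential backoff visible in the log's durationBeforeRetry values.
func SetUpWithBackoff(setUp func() error) error {
	backoff := wait.Backoff{
		Duration: 500 * time.Millisecond, // first retry delay, as in the log
		Factor:   2.0,                    // then 1s, 2s, 4s, ...
		Steps:    6,
	}
	return wait.ExponentialBackoff(backoff, func() (bool, error) {
		if err := setUp(); err != nil {
			fmt.Printf("MountVolume.SetUp failed, will retry: %v\n", err)
			return false, nil // not done; wait out the next interval
		}
		return true, nil
	})
}
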
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.919751 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.923391 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.935553 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"335d170d3d75b85cf52d2793bf8e5cdee4c8a7903ec3159e8f7a82cd0db5914e"} Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.936044 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.943672 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.963664 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Jan 21 18:15:59 crc kubenswrapper[5099]: I0121 18:15:59.983839 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.003718 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.024565 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.081314 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgg79\" (UniqueName: \"kubernetes.io/projected/85ddc24f-5591-4300-9269-cbc659dc7b4f-kube-api-access-lgg79\") pod \"controller-manager-65b6cccf98-5pwm7\" (UID: \"85ddc24f-5591-4300-9269-cbc659dc7b4f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-5pwm7" Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.084401 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jnfd\" (UniqueName: \"kubernetes.io/projected/3df86cb4-acbc-40de-9991-9ba4cc6d0397-kube-api-access-7jnfd\") pod \"authentication-operator-7f5c659b84-kbdp8\" (UID: \"3df86cb4-acbc-40de-9991-9ba4cc6d0397\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-kbdp8" Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.101423 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c7rm\" (UniqueName: \"kubernetes.io/projected/9f61a6cf-7081-41ed-9e89-05212a634fb0-kube-api-access-8c7rm\") pod \"route-controller-manager-776cdc94d6-5p85q\" (UID: \"9f61a6cf-7081-41ed-9e89-05212a634fb0\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-5p85q" Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.121642 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wr7bn\" (UniqueName: 
\"kubernetes.io/projected/9ab60787-a0f6-4772-96ae-8278cdada627-kube-api-access-wr7bn\") pod \"catalog-operator-75ff9f647d-nx9kc\" (UID: \"9ab60787-a0f6-4772-96ae-8278cdada627\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-nx9kc" Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.141728 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wr5h\" (UniqueName: \"kubernetes.io/projected/0494dafa-d272-45bf-a11e-7ca78f92223d-kube-api-access-6wr5h\") pod \"apiserver-9ddfb9f55-84k5t\" (UID: \"0494dafa-d272-45bf-a11e-7ca78f92223d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-84k5t" Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.150758 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-5pwm7" Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.161223 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-84k5t" Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.162312 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mdsm\" (UniqueName: \"kubernetes.io/projected/11b4e369-201a-410c-a66c-9612fc9fafa8-kube-api-access-4mdsm\") pod \"kube-storage-version-migrator-operator-565b79b866-4fr9d\" (UID: \"11b4e369-201a-410c-a66c-9612fc9fafa8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4fr9d" Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.178015 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cx6x4\" (UniqueName: \"kubernetes.io/projected/282f137b-885c-4e38-ac24-c35a21457457-kube-api-access-cx6x4\") pod \"machine-approver-54c688565-d55cs\" (UID: \"282f137b-885c-4e38-ac24-c35a21457457\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-d55cs" Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.201382 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-svmm4\" (UniqueName: \"kubernetes.io/projected/287553f1-f80f-47bb-8a01-1930cd0e5d2c-kube-api-access-svmm4\") pod \"cluster-samples-operator-6b564684c8-c6bc8\" (UID: \"287553f1-f80f-47bb-8a01-1930cd0e5d2c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-c6bc8" Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.205053 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-kbdp8" Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.219122 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-d55cs" Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.221995 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcp4k\" (UniqueName: \"kubernetes.io/projected/cc955325-c8fa-4454-ab18-2d7ea44f7da4-kube-api-access-tcp4k\") pod \"openshift-controller-manager-operator-686468bdd5-vs6xv\" (UID: \"cc955325-c8fa-4454-ab18-2d7ea44f7da4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-vs6xv" Jan 21 18:16:00 crc kubenswrapper[5099]: W0121 18:16:00.244415 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod282f137b_885c_4e38_ac24_c35a21457457.slice/crio-d053184b986fe8c456be8bd1c468ee0b9187d7897a0572dea36d660fa1b59af9 WatchSource:0}: Error finding container d053184b986fe8c456be8bd1c468ee0b9187d7897a0572dea36d660fa1b59af9: Status 404 returned error can't find the container with id d053184b986fe8c456be8bd1c468ee0b9187d7897a0572dea36d660fa1b59af9 Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.250305 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-5p85q" Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.250386 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4p94\" (UniqueName: \"kubernetes.io/projected/b99b91a3-bde7-4051-b805-2b015cbd3ab6-kube-api-access-f4p94\") pod \"openshift-apiserver-operator-846cbfc458-nb9km\" (UID: \"b99b91a3-bde7-4051-b805-2b015cbd3ab6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-nb9km" Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.262766 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-vs6xv" Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.263638 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5kzm\" (UniqueName: \"kubernetes.io/projected/0ffb7e64-0677-44ec-971d-fda3f9b87e2d-kube-api-access-j5kzm\") pod \"apiserver-8596bd845d-btpkr\" (UID: \"0ffb7e64-0677-44ec-971d-fda3f9b87e2d\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-btpkr" Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.273372 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-c6bc8" Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.280631 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4fr9d" Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.304214 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.309581 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdgtx\" (UniqueName: \"kubernetes.io/projected/ee64e319-d2fd-4a23-808e-a4ab684a16af-kube-api-access-cdgtx\") pod \"openshift-config-operator-5777786469-lhgtf\" (UID: \"ee64e319-d2fd-4a23-808e-a4ab684a16af\") " pod="openshift-config-operator/openshift-config-operator-5777786469-lhgtf" Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.310886 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwjcj\" (UniqueName: \"kubernetes.io/projected/0e5a1f9f-a6df-4d87-bc2d-509d2632fb32-kube-api-access-cwjcj\") pod \"machine-api-operator-755bb95488-477z9\" (UID: \"0e5a1f9f-a6df-4d87-bc2d-509d2632fb32\") " pod="openshift-machine-api/machine-api-operator-755bb95488-477z9" Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.326194 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-nx9kc" Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.326938 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad44bdbe-5009-4b21-ad83-21185ec2d86d-service-ca-bundle\") pod \"router-default-68cf44c8b8-lqqhp\" (UID: \"ad44bdbe-5009-4b21-ad83-21185ec2d86d\") " pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.327055 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2925eaf0-c587-4c7f-a246-5b64c7103637-config\") pod \"kube-controller-manager-operator-69d5f845f8-rjbt9\" (UID: \"2925eaf0-c587-4c7f-a246-5b64c7103637\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjbt9" Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.327132 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2925eaf0-c587-4c7f-a246-5b64c7103637-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-rjbt9\" (UID: \"2925eaf0-c587-4c7f-a246-5b64c7103637\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjbt9" Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.328294 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad44bdbe-5009-4b21-ad83-21185ec2d86d-service-ca-bundle\") pod \"router-default-68cf44c8b8-lqqhp\" (UID: \"ad44bdbe-5009-4b21-ad83-21185ec2d86d\") " pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.333353 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.344445 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/2925eaf0-c587-4c7f-a246-5b64c7103637-config\") pod \"kube-controller-manager-operator-69d5f845f8-rjbt9\" (UID: \"2925eaf0-c587-4c7f-a246-5b64c7103637\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjbt9" Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.345799 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.353447 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2925eaf0-c587-4c7f-a246-5b64c7103637-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-rjbt9\" (UID: \"2925eaf0-c587-4c7f-a246-5b64c7103637\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjbt9" Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.364383 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.384153 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.411270 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.426150 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.444224 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.466502 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.479963 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-nb9km" Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.515380 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.523352 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.535141 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-477z9" Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.548633 5099 request.go:752] "Waited before sending request" delay="1.940923227s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Ddns-default&limit=500&resourceVersion=0" Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.549378 5099 util.go:30] "No sandbox for pod can be found. 
Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.570140 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\""
Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.570862 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\""
Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.586138 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\""
Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.591297 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-lhgtf"
Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.604804 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\""
Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.624658 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\""
Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.674777 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\""
Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.684724 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\""
Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.685175 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\""
Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.694711 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-5pwm7"]
Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.714925 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\""
Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.724068 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\""
Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.744161 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\""
Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.763377 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\""
Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.785024 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\""
Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.845568 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pzq8\" (UniqueName: \"kubernetes.io/projected/419c7428-8eea-4a26-8329-f359a77e5c80-kube-api-access-9pzq8\") pod \"etcd-operator-69b85846b6-vr7cg\" (UID: \"419c7428-8eea-4a26-8329-f359a77e5c80\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-vr7cg"
Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.854257 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-84k5t"]
Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.860133 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7vsh\" (UniqueName: \"kubernetes.io/projected/ac0d4bff-1835-45f9-bca5-e84de2f1c705-kube-api-access-h7vsh\") pod \"migrator-866fcbc849-vgfbc\" (UID: \"ac0d4bff-1835-45f9-bca5-e84de2f1c705\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-vgfbc"
Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.871602 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-kbdp8"]
Jan 21 18:16:00 crc kubenswrapper[5099]: W0121 18:16:00.873951 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0494dafa_d272_45bf_a11e_7ca78f92223d.slice/crio-4486f3cde6d3ff945c36f1f9655e7b43ee2613d3c489c37662cd294be0083427 WatchSource:0}: Error finding container 4486f3cde6d3ff945c36f1f9655e7b43ee2613d3c489c37662cd294be0083427: Status 404 returned error can't find the container with id 4486f3cde6d3ff945c36f1f9655e7b43ee2613d3c489c37662cd294be0083427
Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.881659 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8g6f5\" (UniqueName: \"kubernetes.io/projected/bd987c53-2ea5-4da5-a0ea-bb16dc45cdb0-kube-api-access-8g6f5\") pod \"machine-config-operator-67c9d58cbb-fwthw\" (UID: \"bd987c53-2ea5-4da5-a0ea-bb16dc45cdb0\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-fwthw"
Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.893706 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zlnm\" (UniqueName: \"kubernetes.io/projected/05e481c5-0ad1-4c76-bf43-a32b82b763c7-kube-api-access-5zlnm\") pod \"collect-profiles-29483655-48djt\" (UID: \"05e481c5-0ad1-4c76-bf43-a32b82b763c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483655-48djt"
Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.932314 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-684w2\" (UniqueName: \"kubernetes.io/projected/c13f0ecd-bdc7-4f94-9013-3277f1b20451-kube-api-access-684w2\") pod \"service-ca-operator-5b9c976747-hkwx9\" (UID: \"c13f0ecd-bdc7-4f94-9013-3277f1b20451\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-hkwx9"
Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.953698 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-vgfbc"
Jan 21 18:16:00 crc kubenswrapper[5099]: I0121 18:16:00.964038 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-kbdp8" event={"ID":"3df86cb4-acbc-40de-9991-9ba4cc6d0397","Type":"ContainerStarted","Data":"2179bb283af7c48a39d59aa2f18940d46249286471905b9a16c5be9f8314cf39"}
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.033439 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-vr7cg"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.046990 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-d55cs" event={"ID":"282f137b-885c-4e38-ac24-c35a21457457","Type":"ContainerStarted","Data":"d053184b986fe8c456be8bd1c468ee0b9187d7897a0572dea36d660fa1b59af9"}
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.047843 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-fwthw"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.048271 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-84k5t" event={"ID":"0494dafa-d272-45bf-a11e-7ca78f92223d","Type":"ContainerStarted","Data":"4486f3cde6d3ff945c36f1f9655e7b43ee2613d3c489c37662cd294be0083427"}
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.051888 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-5pwm7" event={"ID":"85ddc24f-5591-4300-9269-cbc659dc7b4f","Type":"ContainerStarted","Data":"b0e22df1402c9c6886021823db05b674c1cdfd8195a2b331f18dacd5c80ee76f"}
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.117957 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-hkwx9"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.150302 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xm4mk\" (UniqueName: \"kubernetes.io/projected/39b31197-feb5-4a81-8dca-de4b873dc013-kube-api-access-xm4mk\") pod \"oauth-openshift-66458b6674-6qnjf\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.153136 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483655-48djt"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.186361 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\""
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.218340 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\""
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.221620 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-588sd\" (UniqueName: \"kubernetes.io/projected/67a0e83c-f043-4329-95ac-4cc0a6ac538f-kube-api-access-588sd\") pod \"marketplace-operator-547dbd544d-lxg2b\" (UID: \"67a0e83c-f043-4329-95ac-4cc0a6ac538f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-lxg2b"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.227268 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\""
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.226901 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-c6bc8"]
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.241357 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf"
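The "SyncLoop (PLEG)" entries interleaved above come from the kubelet's Pod Lifecycle Event Generator, which relays container-runtime state changes (here ContainerStarted, carrying the new sandbox/container IDs) into the sync loop; the adjacent cadvisor "Failed to process watch event ... 404" warnings are a benign startup race where the cgroup watcher sees a container directory before the container is registered. PLEG itself is kubelet-internal, but the same container-start transitions can be observed from the API side by watching pod status, as in this hypothetical client-go sketch (namespace and pod name are placeholders taken from the log):

    // pod_watch_sketch.go: observing container starts from the API side,
    // an API-level analogue of the ContainerStarted PLEG events above.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}

    	w, err := clientset.CoreV1().Pods("openshift-apiserver").Watch(
    		context.Background(),
    		metav1.ListOptions{FieldSelector: "metadata.name=apiserver-9ddfb9f55-84k5t"})
    	if err != nil {
    		panic(err)
    	}
    	defer w.Stop()

    	// Each status update reports which containers have entered Running.
    	for ev := range w.ResultChan() {
    		pod, ok := ev.Object.(*corev1.Pod)
    		if !ok {
    			continue
    		}
    		for _, cs := range pod.Status.ContainerStatuses {
    			if cs.State.Running != nil {
    				fmt.Printf("container %s running since %s\n",
    					cs.Name, cs.State.Running.StartedAt)
    			}
    		}
    	}
    }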
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.246522 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\""
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.247056 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtklx\" (UniqueName: \"kubernetes.io/projected/8f5bf46f-e39c-4fa5-9ec3-24912f616295-kube-api-access-vtklx\") pod \"package-server-manager-77f986bd66-p9ggs\" (UID: \"8f5bf46f-e39c-4fa5-9ec3-24912f616295\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-p9ggs"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.247747 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4r4x\" (UniqueName: \"kubernetes.io/projected/63bfe3eb-44bd-45db-8327-52468bb9ca12-kube-api-access-l4r4x\") pod \"service-ca-74545575db-ddw2j\" (UID: \"63bfe3eb-44bd-45db-8327-52468bb9ca12\") " pod="openshift-service-ca/service-ca-74545575db-ddw2j"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.248912 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpwwd\" (UniqueName: \"kubernetes.io/projected/039025e0-e2cb-479d-b87a-9966fa3d96f2-kube-api-access-fpwwd\") pod \"olm-operator-5cdf44d969-4j82v\" (UID: \"039025e0-e2cb-479d-b87a-9966fa3d96f2\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-4j82v"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.273633 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wlkq\" (UniqueName: \"kubernetes.io/projected/2b370d45-15f6-4f78-90d8-f15bb7f31949-kube-api-access-6wlkq\") pod \"packageserver-7d4fc7d867-xr84c\" (UID: \"2b370d45-15f6-4f78-90d8-f15bb7f31949\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-xr84c"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.274585 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcjlf\" (UniqueName: \"kubernetes.io/projected/ad44bdbe-5009-4b21-ad83-21185ec2d86d-kube-api-access-qcjlf\") pod \"router-default-68cf44c8b8-lqqhp\" (UID: \"ad44bdbe-5009-4b21-ad83-21185ec2d86d\") " pod="openshift-ingress/router-default-68cf44c8b8-lqqhp"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.283763 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\""
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.285626 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cc660b0c-3432-4bfa-8349-0f7ac08afce8-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-62qpd\" (UID: \"cc660b0c-3432-4bfa-8349-0f7ac08afce8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-62qpd"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.285856 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9qjj\" (UniqueName: \"kubernetes.io/projected/178950b5-b1b9-4d7d-90b1-ba4fb79fd10d-kube-api-access-b9qjj\") pod \"control-plane-machine-set-operator-75ffdb6fcd-xhj5t\" (UID: \"178950b5-b1b9-4d7d-90b1-ba4fb79fd10d\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-xhj5t"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.288205 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gbhn\" (UniqueName: \"kubernetes.io/projected/da3a0959-1a85-473a-95d5-51b77e30c5da-kube-api-access-9gbhn\") pod \"multus-admission-controller-69db94689b-xfrc5\" (UID: \"da3a0959-1a85-473a-95d5-51b77e30c5da\") " pod="openshift-multus/multus-admission-controller-69db94689b-xfrc5"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.303406 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\""
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.308140 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-xfrc5"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.313087 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kj6qk\" (UniqueName: \"kubernetes.io/projected/2dde6863-5960-4b1b-b694-be1862901fb0-kube-api-access-kj6qk\") pod \"machine-config-controller-f9cdd68f7-q9l8j\" (UID: \"2dde6863-5960-4b1b-b694-be1862901fb0\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-q9l8j"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.316178 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-4j82v"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.316840 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2925eaf0-c587-4c7f-a246-5b64c7103637-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-rjbt9\" (UID: \"2925eaf0-c587-4c7f-a246-5b64c7103637\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjbt9"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.321505 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lb2g4\" (UniqueName: \"kubernetes.io/projected/09b91830-9a07-4d48-9435-c5f7e9c2a402-kube-api-access-lb2g4\") pod \"cluster-image-registry-operator-86c45576b9-6ww7n\" (UID: \"09b91830-9a07-4d48-9435-c5f7e9c2a402\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-6ww7n"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.342933 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-p9ggs"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.346166 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/09b91830-9a07-4d48-9435-c5f7e9c2a402-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-6ww7n\" (UID: \"09b91830-9a07-4d48-9435-c5f7e9c2a402\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-6ww7n"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.355985 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/90ce37a0-d38f-4712-89f0-8572a04c303d-bound-sa-token\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.356035 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb4ca0e8-efd8-493c-8784-2e28266561eb-kube-api-access\") pod \"kube-apiserver-operator-575994946d-2zxgx\" (UID: \"cb4ca0e8-efd8-493c-8784-2e28266561eb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-2zxgx"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.356058 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/90ce37a0-d38f-4712-89f0-8572a04c303d-registry-tls\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.356081 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpcm4\" (UniqueName: \"kubernetes.io/projected/d6a1f7e0-130a-4bf8-8602-8b1800b7de37-kube-api-access-vpcm4\") pod \"ingress-operator-6b9cb4dbcf-6zt7l\" (UID: \"d6a1f7e0-130a-4bf8-8602-8b1800b7de37\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6zt7l"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.356270 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d6a1f7e0-130a-4bf8-8602-8b1800b7de37-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-6zt7l\" (UID: \"d6a1f7e0-130a-4bf8-8602-8b1800b7de37\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6zt7l"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.356305 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/90ce37a0-d38f-4712-89f0-8572a04c303d-trusted-ca\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.356393 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cf468a9b-3840-46c0-8390-79ec278be1d0-console-config\") pod \"console-64d44f6ddf-6r5xz\" (UID: \"cf468a9b-3840-46c0-8390-79ec278be1d0\") " pod="openshift-console/console-64d44f6ddf-6r5xz"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.356414 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjqmq\" (UniqueName: \"kubernetes.io/projected/cf468a9b-3840-46c0-8390-79ec278be1d0-kube-api-access-qjqmq\") pod \"console-64d44f6ddf-6r5xz\" (UID: \"cf468a9b-3840-46c0-8390-79ec278be1d0\") " pod="openshift-console/console-64d44f6ddf-6r5xz"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.356474 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cf468a9b-3840-46c0-8390-79ec278be1d0-service-ca\") pod \"console-64d44f6ddf-6r5xz\" (UID: \"cf468a9b-3840-46c0-8390-79ec278be1d0\") " pod="openshift-console/console-64d44f6ddf-6r5xz"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.356535 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f08a4565-4bb3-44b9-90e0-1b841c3127ea-tmp-dir\") pod \"dns-operator-799b87ffcd-z2ttk\" (UID: \"f08a4565-4bb3-44b9-90e0-1b841c3127ea\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-z2ttk"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.356555 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8855b7e4-a1e8-41ae-b995-832120b0bdcd-trusted-ca\") pod \"console-operator-67c89758df-48lzl\" (UID: \"8855b7e4-a1e8-41ae-b995-832120b0bdcd\") " pod="openshift-console-operator/console-operator-67c89758df-48lzl"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.356582 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf8sm\" (UniqueName: \"kubernetes.io/projected/2805b946-8ae5-4a2f-8ae0-e5fb058174e7-kube-api-access-zf8sm\") pod \"dns-default-lkws9\" (UID: \"2805b946-8ae5-4a2f-8ae0-e5fb058174e7\") " pod="openshift-dns/dns-default-lkws9"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.356607 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2805b946-8ae5-4a2f-8ae0-e5fb058174e7-metrics-tls\") pod \"dns-default-lkws9\" (UID: \"2805b946-8ae5-4a2f-8ae0-e5fb058174e7\") " pod="openshift-dns/dns-default-lkws9"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.356627 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d6a1f7e0-130a-4bf8-8602-8b1800b7de37-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-6zt7l\" (UID: \"d6a1f7e0-130a-4bf8-8602-8b1800b7de37\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6zt7l"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.356645 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8855b7e4-a1e8-41ae-b995-832120b0bdcd-serving-cert\") pod \"console-operator-67c89758df-48lzl\" (UID: \"8855b7e4-a1e8-41ae-b995-832120b0bdcd\") " pod="openshift-console-operator/console-operator-67c89758df-48lzl"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.356750 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/90ce37a0-d38f-4712-89f0-8572a04c303d-registry-certificates\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.356777 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d6a1f7e0-130a-4bf8-8602-8b1800b7de37-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-6zt7l\" (UID: \"d6a1f7e0-130a-4bf8-8602-8b1800b7de37\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6zt7l"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.356815 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/2805b946-8ae5-4a2f-8ae0-e5fb058174e7-tmp-dir\") pod \"dns-default-lkws9\" (UID: \"2805b946-8ae5-4a2f-8ae0-e5fb058174e7\") " pod="openshift-dns/dns-default-lkws9"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.356851 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf468a9b-3840-46c0-8390-79ec278be1d0-trusted-ca-bundle\") pod \"console-64d44f6ddf-6r5xz\" (UID: \"cf468a9b-3840-46c0-8390-79ec278be1d0\") " pod="openshift-console/console-64d44f6ddf-6r5xz"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.356896 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cf468a9b-3840-46c0-8390-79ec278be1d0-console-serving-cert\") pod \"console-64d44f6ddf-6r5xz\" (UID: \"cf468a9b-3840-46c0-8390-79ec278be1d0\") " pod="openshift-console/console-64d44f6ddf-6r5xz"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.356921 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/90ce37a0-d38f-4712-89f0-8572a04c303d-installation-pull-secrets\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.356944 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/90ce37a0-d38f-4712-89f0-8572a04c303d-ca-trust-extracted\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.357000 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.357041 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt6s8\" (UniqueName: \"kubernetes.io/projected/bee88171-b2f0-49bb-92aa-8a0d79d87cb7-kube-api-access-tt6s8\") pod \"downloads-747b44746d-zlpql\" (UID: \"bee88171-b2f0-49bb-92aa-8a0d79d87cb7\") " pod="openshift-console/downloads-747b44746d-zlpql"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.357064 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f08a4565-4bb3-44b9-90e0-1b841c3127ea-metrics-tls\") pod \"dns-operator-799b87ffcd-z2ttk\" (UID: \"f08a4565-4bb3-44b9-90e0-1b841c3127ea\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-z2ttk"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.357088 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhdks\" (UniqueName: \"kubernetes.io/projected/f08a4565-4bb3-44b9-90e0-1b841c3127ea-kube-api-access-bhdks\") pod \"dns-operator-799b87ffcd-z2ttk\" (UID: \"f08a4565-4bb3-44b9-90e0-1b841c3127ea\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-z2ttk"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.357113 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2r7t\" (UniqueName: \"kubernetes.io/projected/90ce37a0-d38f-4712-89f0-8572a04c303d-kube-api-access-n2r7t\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.357133 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2t6g\" (UniqueName: \"kubernetes.io/projected/8855b7e4-a1e8-41ae-b995-832120b0bdcd-kube-api-access-n2t6g\") pod \"console-operator-67c89758df-48lzl\" (UID: \"8855b7e4-a1e8-41ae-b995-832120b0bdcd\") " pod="openshift-console-operator/console-operator-67c89758df-48lzl"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.357179 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/cb4ca0e8-efd8-493c-8784-2e28266561eb-tmp-dir\") pod \"kube-apiserver-operator-575994946d-2zxgx\" (UID: \"cb4ca0e8-efd8-493c-8784-2e28266561eb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-2zxgx"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.357222 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb4ca0e8-efd8-493c-8784-2e28266561eb-config\") pod \"kube-apiserver-operator-575994946d-2zxgx\" (UID: \"cb4ca0e8-efd8-493c-8784-2e28266561eb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-2zxgx"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.357271 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cf468a9b-3840-46c0-8390-79ec278be1d0-console-oauth-config\") pod \"console-64d44f6ddf-6r5xz\" (UID: \"cf468a9b-3840-46c0-8390-79ec278be1d0\") " pod="openshift-console/console-64d44f6ddf-6r5xz"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.357295 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2805b946-8ae5-4a2f-8ae0-e5fb058174e7-config-volume\") pod \"dns-default-lkws9\" (UID: \"2805b946-8ae5-4a2f-8ae0-e5fb058174e7\") " pod="openshift-dns/dns-default-lkws9"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.357337 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cf468a9b-3840-46c0-8390-79ec278be1d0-oauth-serving-cert\") pod \"console-64d44f6ddf-6r5xz\" (UID: \"cf468a9b-3840-46c0-8390-79ec278be1d0\") " pod="openshift-console/console-64d44f6ddf-6r5xz"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.357375 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8855b7e4-a1e8-41ae-b995-832120b0bdcd-config\") pod \"console-operator-67c89758df-48lzl\" (UID: \"8855b7e4-a1e8-41ae-b995-832120b0bdcd\") " pod="openshift-console-operator/console-operator-67c89758df-48lzl"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.357398 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb4ca0e8-efd8-493c-8784-2e28266561eb-serving-cert\") pod \"kube-apiserver-operator-575994946d-2zxgx\" (UID: \"cb4ca0e8-efd8-493c-8784-2e28266561eb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-2zxgx"
Jan 21 18:16:01 crc kubenswrapper[5099]: E0121 18:16:01.359067 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:01.859047449 +0000 UTC m=+119.273009910 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.430567 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-lxg2b"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.435866 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-xr84c"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.450841 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-q9l8j"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.450893 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-ddw2j"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.465189 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 18:16:01 crc kubenswrapper[5099]: E0121 18:16:01.465649 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:01.965622698 +0000 UTC m=+119.379585159 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.465699 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mnnk\" (UniqueName: \"kubernetes.io/projected/d0d4f813-c328-441b-963b-5241f73f9da2-kube-api-access-4mnnk\") pod \"csi-hostpathplugin-r6vwz\" (UID: \"d0d4f813-c328-441b-963b-5241f73f9da2\") " pod="hostpath-provisioner/csi-hostpathplugin-r6vwz"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.465756 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glmnx\" (UniqueName: \"kubernetes.io/projected/e0db719c-cb3c-4c7d-ab76-20a341a011e6-kube-api-access-glmnx\") pod \"cni-sysctl-allowlist-ds-zjwsj\" (UID: \"e0db719c-cb3c-4c7d-ab76-20a341a011e6\") " pod="openshift-multus/cni-sysctl-allowlist-ds-zjwsj"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.465824 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/d0d4f813-c328-441b-963b-5241f73f9da2-plugins-dir\") pod \"csi-hostpathplugin-r6vwz\" (UID: \"d0d4f813-c328-441b-963b-5241f73f9da2\") " pod="hostpath-provisioner/csi-hostpathplugin-r6vwz"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.465888 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d6a1f7e0-130a-4bf8-8602-8b1800b7de37-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-6zt7l\" (UID: \"d6a1f7e0-130a-4bf8-8602-8b1800b7de37\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6zt7l"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.465907 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/d0d4f813-c328-441b-963b-5241f73f9da2-csi-data-dir\") pod \"csi-hostpathplugin-r6vwz\" (UID: \"d0d4f813-c328-441b-963b-5241f73f9da2\") " pod="hostpath-provisioner/csi-hostpathplugin-r6vwz"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.465956 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/90ce37a0-d38f-4712-89f0-8572a04c303d-trusted-ca\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.466031 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/26f1bd0a-19bb-49e5-9f2c-3e93d2703cc7-certs\") pod \"machine-config-server-dgxpb\" (UID: \"26f1bd0a-19bb-49e5-9f2c-3e93d2703cc7\") " pod="openshift-machine-config-operator/machine-config-server-dgxpb"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.466064 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a8454077-b0c4-4cdc-80e0-a620312eec57-cert\") pod \"ingress-canary-spkz4\" (UID: \"a8454077-b0c4-4cdc-80e0-a620312eec57\") " pod="openshift-ingress-canary/ingress-canary-spkz4"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.466103 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cf468a9b-3840-46c0-8390-79ec278be1d0-console-config\") pod \"console-64d44f6ddf-6r5xz\" (UID: \"cf468a9b-3840-46c0-8390-79ec278be1d0\") " pod="openshift-console/console-64d44f6ddf-6r5xz"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.466122 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qjqmq\" (UniqueName: \"kubernetes.io/projected/cf468a9b-3840-46c0-8390-79ec278be1d0-kube-api-access-qjqmq\") pod \"console-64d44f6ddf-6r5xz\" (UID: \"cf468a9b-3840-46c0-8390-79ec278be1d0\") " pod="openshift-console/console-64d44f6ddf-6r5xz"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.466143 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cf468a9b-3840-46c0-8390-79ec278be1d0-service-ca\") pod \"console-64d44f6ddf-6r5xz\" (UID: \"cf468a9b-3840-46c0-8390-79ec278be1d0\") " pod="openshift-console/console-64d44f6ddf-6r5xz"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.466215 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f08a4565-4bb3-44b9-90e0-1b841c3127ea-tmp-dir\") pod \"dns-operator-799b87ffcd-z2ttk\" (UID: \"f08a4565-4bb3-44b9-90e0-1b841c3127ea\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-z2ttk"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.466261 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8855b7e4-a1e8-41ae-b995-832120b0bdcd-trusted-ca\") pod \"console-operator-67c89758df-48lzl\" (UID: \"8855b7e4-a1e8-41ae-b995-832120b0bdcd\") " pod="openshift-console-operator/console-operator-67c89758df-48lzl"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.466301 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zf8sm\" (UniqueName: \"kubernetes.io/projected/2805b946-8ae5-4a2f-8ae0-e5fb058174e7-kube-api-access-zf8sm\") pod \"dns-default-lkws9\" (UID: \"2805b946-8ae5-4a2f-8ae0-e5fb058174e7\") " pod="openshift-dns/dns-default-lkws9"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.466341 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2805b946-8ae5-4a2f-8ae0-e5fb058174e7-metrics-tls\") pod \"dns-default-lkws9\" (UID: \"2805b946-8ae5-4a2f-8ae0-e5fb058174e7\") " pod="openshift-dns/dns-default-lkws9"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.466365 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d6a1f7e0-130a-4bf8-8602-8b1800b7de37-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-6zt7l\" (UID: \"d6a1f7e0-130a-4bf8-8602-8b1800b7de37\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6zt7l"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.466381 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8855b7e4-a1e8-41ae-b995-832120b0bdcd-serving-cert\") pod \"console-operator-67c89758df-48lzl\" (UID: \"8855b7e4-a1e8-41ae-b995-832120b0bdcd\") " pod="openshift-console-operator/console-operator-67c89758df-48lzl"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.466480 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/90ce37a0-d38f-4712-89f0-8572a04c303d-registry-certificates\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.466518 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d0d4f813-c328-441b-963b-5241f73f9da2-socket-dir\") pod \"csi-hostpathplugin-r6vwz\" (UID: \"d0d4f813-c328-441b-963b-5241f73f9da2\") " pod="hostpath-provisioner/csi-hostpathplugin-r6vwz"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.467286 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d6a1f7e0-130a-4bf8-8602-8b1800b7de37-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-6zt7l\" (UID: \"d6a1f7e0-130a-4bf8-8602-8b1800b7de37\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6zt7l"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.467328 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/2805b946-8ae5-4a2f-8ae0-e5fb058174e7-tmp-dir\") pod \"dns-default-lkws9\" (UID: \"2805b946-8ae5-4a2f-8ae0-e5fb058174e7\") " pod="openshift-dns/dns-default-lkws9"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.467536 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf468a9b-3840-46c0-8390-79ec278be1d0-trusted-ca-bundle\") pod \"console-64d44f6ddf-6r5xz\" (UID: \"cf468a9b-3840-46c0-8390-79ec278be1d0\") " pod="openshift-console/console-64d44f6ddf-6r5xz"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.467569 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/d0d4f813-c328-441b-963b-5241f73f9da2-mountpoint-dir\") pod \"csi-hostpathplugin-r6vwz\" (UID: \"d0d4f813-c328-441b-963b-5241f73f9da2\") " pod="hostpath-provisioner/csi-hostpathplugin-r6vwz"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.468032 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cf468a9b-3840-46c0-8390-79ec278be1d0-service-ca\") pod \"console-64d44f6ddf-6r5xz\" (UID: \"cf468a9b-3840-46c0-8390-79ec278be1d0\") " pod="openshift-console/console-64d44f6ddf-6r5xz"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.468542 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8855b7e4-a1e8-41ae-b995-832120b0bdcd-trusted-ca\") pod \"console-operator-67c89758df-48lzl\" (UID: \"8855b7e4-a1e8-41ae-b995-832120b0bdcd\") " pod="openshift-console-operator/console-operator-67c89758df-48lzl"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.469662 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d6a1f7e0-130a-4bf8-8602-8b1800b7de37-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-6zt7l\" (UID: \"d6a1f7e0-130a-4bf8-8602-8b1800b7de37\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6zt7l"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.470035 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/90ce37a0-d38f-4712-89f0-8572a04c303d-trusted-ca\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.470183 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/2805b946-8ae5-4a2f-8ae0-e5fb058174e7-tmp-dir\") pod \"dns-default-lkws9\" (UID: \"2805b946-8ae5-4a2f-8ae0-e5fb058174e7\") " pod="openshift-dns/dns-default-lkws9"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.470784 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f08a4565-4bb3-44b9-90e0-1b841c3127ea-tmp-dir\") pod \"dns-operator-799b87ffcd-z2ttk\" (UID: \"f08a4565-4bb3-44b9-90e0-1b841c3127ea\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-z2ttk"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.470874 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cf468a9b-3840-46c0-8390-79ec278be1d0-console-serving-cert\") pod \"console-64d44f6ddf-6r5xz\" (UID: \"cf468a9b-3840-46c0-8390-79ec278be1d0\") " pod="openshift-console/console-64d44f6ddf-6r5xz"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.471315 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cf468a9b-3840-46c0-8390-79ec278be1d0-console-config\") pod \"console-64d44f6ddf-6r5xz\" (UID: \"cf468a9b-3840-46c0-8390-79ec278be1d0\") " pod="openshift-console/console-64d44f6ddf-6r5xz"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.471336 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/90ce37a0-d38f-4712-89f0-8572a04c303d-installation-pull-secrets\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.471388 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/90ce37a0-d38f-4712-89f0-8572a04c303d-ca-trust-extracted\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.471652 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf468a9b-3840-46c0-8390-79ec278be1d0-trusted-ca-bundle\") pod \"console-64d44f6ddf-6r5xz\" (UID: \"cf468a9b-3840-46c0-8390-79ec278be1d0\") " pod="openshift-console/console-64d44f6ddf-6r5xz"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.471781 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/90ce37a0-d38f-4712-89f0-8572a04c303d-ca-trust-extracted\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.471939 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.471966 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/e0db719c-cb3c-4c7d-ab76-20a341a011e6-ready\") pod \"cni-sysctl-allowlist-ds-zjwsj\" (UID: \"e0db719c-cb3c-4c7d-ab76-20a341a011e6\") " pod="openshift-multus/cni-sysctl-allowlist-ds-zjwsj"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.472078 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tt6s8\" (UniqueName: \"kubernetes.io/projected/bee88171-b2f0-49bb-92aa-8a0d79d87cb7-kube-api-access-tt6s8\") pod \"downloads-747b44746d-zlpql\" (UID: \"bee88171-b2f0-49bb-92aa-8a0d79d87cb7\") " pod="openshift-console/downloads-747b44746d-zlpql"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.472100 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f08a4565-4bb3-44b9-90e0-1b841c3127ea-metrics-tls\") pod \"dns-operator-799b87ffcd-z2ttk\" (UID: \"f08a4565-4bb3-44b9-90e0-1b841c3127ea\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-z2ttk"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.472119 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bhdks\" (UniqueName: \"kubernetes.io/projected/f08a4565-4bb3-44b9-90e0-1b841c3127ea-kube-api-access-bhdks\") pod \"dns-operator-799b87ffcd-z2ttk\" (UID: \"f08a4565-4bb3-44b9-90e0-1b841c3127ea\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-z2ttk"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.472955 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/90ce37a0-d38f-4712-89f0-8572a04c303d-registry-certificates\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r"
Jan 21 18:16:01 crc kubenswrapper[5099]: E0121 18:16:01.472981 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:01.972963687 +0000 UTC m=+119.386926148 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.473116 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n2r7t\" (UniqueName: \"kubernetes.io/projected/90ce37a0-d38f-4712-89f0-8572a04c303d-kube-api-access-n2r7t\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.473169 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n2t6g\" (UniqueName: \"kubernetes.io/projected/8855b7e4-a1e8-41ae-b995-832120b0bdcd-kube-api-access-n2t6g\") pod \"console-operator-67c89758df-48lzl\" (UID: \"8855b7e4-a1e8-41ae-b995-832120b0bdcd\") " pod="openshift-console-operator/console-operator-67c89758df-48lzl"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.473290 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/cb4ca0e8-efd8-493c-8784-2e28266561eb-tmp-dir\") pod \"kube-apiserver-operator-575994946d-2zxgx\" (UID: \"cb4ca0e8-efd8-493c-8784-2e28266561eb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-2zxgx"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.473331 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb4ca0e8-efd8-493c-8784-2e28266561eb-config\") pod \"kube-apiserver-operator-575994946d-2zxgx\" (UID: \"cb4ca0e8-efd8-493c-8784-2e28266561eb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-2zxgx"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.473377 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cf468a9b-3840-46c0-8390-79ec278be1d0-console-oauth-config\") pod \"console-64d44f6ddf-6r5xz\" (UID: \"cf468a9b-3840-46c0-8390-79ec278be1d0\") " pod="openshift-console/console-64d44f6ddf-6r5xz"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.473396 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2805b946-8ae5-4a2f-8ae0-e5fb058174e7-config-volume\") pod \"dns-default-lkws9\" (UID: \"2805b946-8ae5-4a2f-8ae0-e5fb058174e7\") " pod="openshift-dns/dns-default-lkws9"
Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.473401 5099 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-6ww7n" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.473417 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cf468a9b-3840-46c0-8390-79ec278be1d0-oauth-serving-cert\") pod \"console-64d44f6ddf-6r5xz\" (UID: \"cf468a9b-3840-46c0-8390-79ec278be1d0\") " pod="openshift-console/console-64d44f6ddf-6r5xz" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.473439 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8855b7e4-a1e8-41ae-b995-832120b0bdcd-config\") pod \"console-operator-67c89758df-48lzl\" (UID: \"8855b7e4-a1e8-41ae-b995-832120b0bdcd\") " pod="openshift-console-operator/console-operator-67c89758df-48lzl" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.473458 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e0db719c-cb3c-4c7d-ab76-20a341a011e6-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-zjwsj\" (UID: \"e0db719c-cb3c-4c7d-ab76-20a341a011e6\") " pod="openshift-multus/cni-sysctl-allowlist-ds-zjwsj" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.473478 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb4ca0e8-efd8-493c-8784-2e28266561eb-serving-cert\") pod \"kube-apiserver-operator-575994946d-2zxgx\" (UID: \"cb4ca0e8-efd8-493c-8784-2e28266561eb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-2zxgx" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.473511 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/90ce37a0-d38f-4712-89f0-8572a04c303d-bound-sa-token\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.473528 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb4ca0e8-efd8-493c-8784-2e28266561eb-kube-api-access\") pod \"kube-apiserver-operator-575994946d-2zxgx\" (UID: \"cb4ca0e8-efd8-493c-8784-2e28266561eb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-2zxgx" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.473545 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d0d4f813-c328-441b-963b-5241f73f9da2-registration-dir\") pod \"csi-hostpathplugin-r6vwz\" (UID: \"d0d4f813-c328-441b-963b-5241f73f9da2\") " pod="hostpath-provisioner/csi-hostpathplugin-r6vwz" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.473564 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e0db719c-cb3c-4c7d-ab76-20a341a011e6-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-zjwsj\" (UID: \"e0db719c-cb3c-4c7d-ab76-20a341a011e6\") " pod="openshift-multus/cni-sysctl-allowlist-ds-zjwsj" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.473587 5099 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/90ce37a0-d38f-4712-89f0-8572a04c303d-registry-tls\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.473604 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vpcm4\" (UniqueName: \"kubernetes.io/projected/d6a1f7e0-130a-4bf8-8602-8b1800b7de37-kube-api-access-vpcm4\") pod \"ingress-operator-6b9cb4dbcf-6zt7l\" (UID: \"d6a1f7e0-130a-4bf8-8602-8b1800b7de37\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6zt7l" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.473629 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/26f1bd0a-19bb-49e5-9f2c-3e93d2703cc7-node-bootstrap-token\") pod \"machine-config-server-dgxpb\" (UID: \"26f1bd0a-19bb-49e5-9f2c-3e93d2703cc7\") " pod="openshift-machine-config-operator/machine-config-server-dgxpb" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.473667 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44pqb\" (UniqueName: \"kubernetes.io/projected/26f1bd0a-19bb-49e5-9f2c-3e93d2703cc7-kube-api-access-44pqb\") pod \"machine-config-server-dgxpb\" (UID: \"26f1bd0a-19bb-49e5-9f2c-3e93d2703cc7\") " pod="openshift-machine-config-operator/machine-config-server-dgxpb" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.473684 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghxxv\" (UniqueName: \"kubernetes.io/projected/a8454077-b0c4-4cdc-80e0-a620312eec57-kube-api-access-ghxxv\") pod \"ingress-canary-spkz4\" (UID: \"a8454077-b0c4-4cdc-80e0-a620312eec57\") " pod="openshift-ingress-canary/ingress-canary-spkz4" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.476598 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/cb4ca0e8-efd8-493c-8784-2e28266561eb-tmp-dir\") pod \"kube-apiserver-operator-575994946d-2zxgx\" (UID: \"cb4ca0e8-efd8-493c-8784-2e28266561eb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-2zxgx" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.477337 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/90ce37a0-d38f-4712-89f0-8572a04c303d-installation-pull-secrets\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.477425 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb4ca0e8-efd8-493c-8784-2e28266561eb-config\") pod \"kube-apiserver-operator-575994946d-2zxgx\" (UID: \"cb4ca0e8-efd8-493c-8784-2e28266561eb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-2zxgx" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.477767 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/90ce37a0-d38f-4712-89f0-8572a04c303d-registry-tls\") pod 
\"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.478154 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8855b7e4-a1e8-41ae-b995-832120b0bdcd-config\") pod \"console-operator-67c89758df-48lzl\" (UID: \"8855b7e4-a1e8-41ae-b995-832120b0bdcd\") " pod="openshift-console-operator/console-operator-67c89758df-48lzl" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.478768 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cf468a9b-3840-46c0-8390-79ec278be1d0-oauth-serving-cert\") pod \"console-64d44f6ddf-6r5xz\" (UID: \"cf468a9b-3840-46c0-8390-79ec278be1d0\") " pod="openshift-console/console-64d44f6ddf-6r5xz" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.480287 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb4ca0e8-efd8-493c-8784-2e28266561eb-serving-cert\") pod \"kube-apiserver-operator-575994946d-2zxgx\" (UID: \"cb4ca0e8-efd8-493c-8784-2e28266561eb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-2zxgx" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.486604 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2805b946-8ae5-4a2f-8ae0-e5fb058174e7-config-volume\") pod \"dns-default-lkws9\" (UID: \"2805b946-8ae5-4a2f-8ae0-e5fb058174e7\") " pod="openshift-dns/dns-default-lkws9" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.493994 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.499383 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cf468a9b-3840-46c0-8390-79ec278be1d0-console-oauth-config\") pod \"console-64d44f6ddf-6r5xz\" (UID: \"cf468a9b-3840-46c0-8390-79ec278be1d0\") " pod="openshift-console/console-64d44f6ddf-6r5xz" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.499630 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-62qpd" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.501056 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cf468a9b-3840-46c0-8390-79ec278be1d0-console-serving-cert\") pod \"console-64d44f6ddf-6r5xz\" (UID: \"cf468a9b-3840-46c0-8390-79ec278be1d0\") " pod="openshift-console/console-64d44f6ddf-6r5xz" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.502042 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt6s8\" (UniqueName: \"kubernetes.io/projected/bee88171-b2f0-49bb-92aa-8a0d79d87cb7-kube-api-access-tt6s8\") pod \"downloads-747b44746d-zlpql\" (UID: \"bee88171-b2f0-49bb-92aa-8a0d79d87cb7\") " pod="openshift-console/downloads-747b44746d-zlpql" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.505291 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8855b7e4-a1e8-41ae-b995-832120b0bdcd-serving-cert\") pod \"console-operator-67c89758df-48lzl\" (UID: \"8855b7e4-a1e8-41ae-b995-832120b0bdcd\") " pod="openshift-console-operator/console-operator-67c89758df-48lzl" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.505839 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d6a1f7e0-130a-4bf8-8602-8b1800b7de37-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-6zt7l\" (UID: \"d6a1f7e0-130a-4bf8-8602-8b1800b7de37\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6zt7l" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.508227 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2805b946-8ae5-4a2f-8ae0-e5fb058174e7-metrics-tls\") pod \"dns-default-lkws9\" (UID: \"2805b946-8ae5-4a2f-8ae0-e5fb058174e7\") " pod="openshift-dns/dns-default-lkws9" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.509159 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f08a4565-4bb3-44b9-90e0-1b841c3127ea-metrics-tls\") pod \"dns-operator-799b87ffcd-z2ttk\" (UID: \"f08a4565-4bb3-44b9-90e0-1b841c3127ea\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-z2ttk" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.509962 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjqmq\" (UniqueName: \"kubernetes.io/projected/cf468a9b-3840-46c0-8390-79ec278be1d0-kube-api-access-qjqmq\") pod \"console-64d44f6ddf-6r5xz\" (UID: \"cf468a9b-3840-46c0-8390-79ec278be1d0\") " pod="openshift-console/console-64d44f6ddf-6r5xz" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.512126 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhdks\" (UniqueName: \"kubernetes.io/projected/f08a4565-4bb3-44b9-90e0-1b841c3127ea-kube-api-access-bhdks\") pod \"dns-operator-799b87ffcd-z2ttk\" (UID: \"f08a4565-4bb3-44b9-90e0-1b841c3127ea\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-z2ttk" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.515076 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf8sm\" (UniqueName: \"kubernetes.io/projected/2805b946-8ae5-4a2f-8ae0-e5fb058174e7-kube-api-access-zf8sm\") pod \"dns-default-lkws9\" (UID: 
\"2805b946-8ae5-4a2f-8ae0-e5fb058174e7\") " pod="openshift-dns/dns-default-lkws9" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.515662 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjbt9" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.521212 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d6a1f7e0-130a-4bf8-8602-8b1800b7de37-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-6zt7l\" (UID: \"d6a1f7e0-130a-4bf8-8602-8b1800b7de37\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6zt7l" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.523336 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2r7t\" (UniqueName: \"kubernetes.io/projected/90ce37a0-d38f-4712-89f0-8572a04c303d-kube-api-access-n2r7t\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.528130 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-zlpql" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.528297 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-xhj5t" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.536472 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-lkws9" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.552172 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2t6g\" (UniqueName: \"kubernetes.io/projected/8855b7e4-a1e8-41ae-b995-832120b0bdcd-kube-api-access-n2t6g\") pod \"console-operator-67c89758df-48lzl\" (UID: \"8855b7e4-a1e8-41ae-b995-832120b0bdcd\") " pod="openshift-console-operator/console-operator-67c89758df-48lzl" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.559862 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/90ce37a0-d38f-4712-89f0-8572a04c303d-bound-sa-token\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.574391 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:01 crc kubenswrapper[5099]: E0121 18:16:01.574646 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:02.074609169 +0000 UTC m=+119.488571740 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.575175 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e0db719c-cb3c-4c7d-ab76-20a341a011e6-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-zjwsj\" (UID: \"e0db719c-cb3c-4c7d-ab76-20a341a011e6\") " pod="openshift-multus/cni-sysctl-allowlist-ds-zjwsj" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.575228 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d0d4f813-c328-441b-963b-5241f73f9da2-registration-dir\") pod \"csi-hostpathplugin-r6vwz\" (UID: \"d0d4f813-c328-441b-963b-5241f73f9da2\") " pod="hostpath-provisioner/csi-hostpathplugin-r6vwz" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.575253 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e0db719c-cb3c-4c7d-ab76-20a341a011e6-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-zjwsj\" (UID: \"e0db719c-cb3c-4c7d-ab76-20a341a011e6\") " pod="openshift-multus/cni-sysctl-allowlist-ds-zjwsj" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.575293 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/26f1bd0a-19bb-49e5-9f2c-3e93d2703cc7-node-bootstrap-token\") pod \"machine-config-server-dgxpb\" (UID: \"26f1bd0a-19bb-49e5-9f2c-3e93d2703cc7\") " pod="openshift-machine-config-operator/machine-config-server-dgxpb" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.575326 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-44pqb\" (UniqueName: \"kubernetes.io/projected/26f1bd0a-19bb-49e5-9f2c-3e93d2703cc7-kube-api-access-44pqb\") pod \"machine-config-server-dgxpb\" (UID: \"26f1bd0a-19bb-49e5-9f2c-3e93d2703cc7\") " pod="openshift-machine-config-operator/machine-config-server-dgxpb" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.575358 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ghxxv\" (UniqueName: \"kubernetes.io/projected/a8454077-b0c4-4cdc-80e0-a620312eec57-kube-api-access-ghxxv\") pod \"ingress-canary-spkz4\" (UID: \"a8454077-b0c4-4cdc-80e0-a620312eec57\") " pod="openshift-ingress-canary/ingress-canary-spkz4" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.575440 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4mnnk\" (UniqueName: \"kubernetes.io/projected/d0d4f813-c328-441b-963b-5241f73f9da2-kube-api-access-4mnnk\") pod \"csi-hostpathplugin-r6vwz\" (UID: \"d0d4f813-c328-441b-963b-5241f73f9da2\") " pod="hostpath-provisioner/csi-hostpathplugin-r6vwz" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.575465 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-glmnx\" (UniqueName: 
\"kubernetes.io/projected/e0db719c-cb3c-4c7d-ab76-20a341a011e6-kube-api-access-glmnx\") pod \"cni-sysctl-allowlist-ds-zjwsj\" (UID: \"e0db719c-cb3c-4c7d-ab76-20a341a011e6\") " pod="openshift-multus/cni-sysctl-allowlist-ds-zjwsj" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.575504 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/d0d4f813-c328-441b-963b-5241f73f9da2-plugins-dir\") pod \"csi-hostpathplugin-r6vwz\" (UID: \"d0d4f813-c328-441b-963b-5241f73f9da2\") " pod="hostpath-provisioner/csi-hostpathplugin-r6vwz" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.575607 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/d0d4f813-c328-441b-963b-5241f73f9da2-csi-data-dir\") pod \"csi-hostpathplugin-r6vwz\" (UID: \"d0d4f813-c328-441b-963b-5241f73f9da2\") " pod="hostpath-provisioner/csi-hostpathplugin-r6vwz" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.575692 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/26f1bd0a-19bb-49e5-9f2c-3e93d2703cc7-certs\") pod \"machine-config-server-dgxpb\" (UID: \"26f1bd0a-19bb-49e5-9f2c-3e93d2703cc7\") " pod="openshift-machine-config-operator/machine-config-server-dgxpb" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.575712 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a8454077-b0c4-4cdc-80e0-a620312eec57-cert\") pod \"ingress-canary-spkz4\" (UID: \"a8454077-b0c4-4cdc-80e0-a620312eec57\") " pod="openshift-ingress-canary/ingress-canary-spkz4" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.575496 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e0db719c-cb3c-4c7d-ab76-20a341a011e6-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-zjwsj\" (UID: \"e0db719c-cb3c-4c7d-ab76-20a341a011e6\") " pod="openshift-multus/cni-sysctl-allowlist-ds-zjwsj" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.575855 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d0d4f813-c328-441b-963b-5241f73f9da2-socket-dir\") pod \"csi-hostpathplugin-r6vwz\" (UID: \"d0d4f813-c328-441b-963b-5241f73f9da2\") " pod="hostpath-provisioner/csi-hostpathplugin-r6vwz" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.575839 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d0d4f813-c328-441b-963b-5241f73f9da2-registration-dir\") pod \"csi-hostpathplugin-r6vwz\" (UID: \"d0d4f813-c328-441b-963b-5241f73f9da2\") " pod="hostpath-provisioner/csi-hostpathplugin-r6vwz" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.575940 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e0db719c-cb3c-4c7d-ab76-20a341a011e6-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-zjwsj\" (UID: \"e0db719c-cb3c-4c7d-ab76-20a341a011e6\") " pod="openshift-multus/cni-sysctl-allowlist-ds-zjwsj" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.576004 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/d0d4f813-c328-441b-963b-5241f73f9da2-socket-dir\") pod \"csi-hostpathplugin-r6vwz\" (UID: \"d0d4f813-c328-441b-963b-5241f73f9da2\") " pod="hostpath-provisioner/csi-hostpathplugin-r6vwz" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.576029 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/d0d4f813-c328-441b-963b-5241f73f9da2-mountpoint-dir\") pod \"csi-hostpathplugin-r6vwz\" (UID: \"d0d4f813-c328-441b-963b-5241f73f9da2\") " pod="hostpath-provisioner/csi-hostpathplugin-r6vwz" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.576050 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/d0d4f813-c328-441b-963b-5241f73f9da2-plugins-dir\") pod \"csi-hostpathplugin-r6vwz\" (UID: \"d0d4f813-c328-441b-963b-5241f73f9da2\") " pod="hostpath-provisioner/csi-hostpathplugin-r6vwz" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.576120 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/d0d4f813-c328-441b-963b-5241f73f9da2-csi-data-dir\") pod \"csi-hostpathplugin-r6vwz\" (UID: \"d0d4f813-c328-441b-963b-5241f73f9da2\") " pod="hostpath-provisioner/csi-hostpathplugin-r6vwz" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.576142 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.576191 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/e0db719c-cb3c-4c7d-ab76-20a341a011e6-ready\") pod \"cni-sysctl-allowlist-ds-zjwsj\" (UID: \"e0db719c-cb3c-4c7d-ab76-20a341a011e6\") " pod="openshift-multus/cni-sysctl-allowlist-ds-zjwsj" Jan 21 18:16:01 crc kubenswrapper[5099]: W0121 18:16:01.576252 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad44bdbe_5009_4b21_ad83_21185ec2d86d.slice/crio-aa4dbceb1bb2ad7c20bcbfefa5487199afd9f1833202c20f78c3dc9743e41127 WatchSource:0}: Error finding container aa4dbceb1bb2ad7c20bcbfefa5487199afd9f1833202c20f78c3dc9743e41127: Status 404 returned error can't find the container with id aa4dbceb1bb2ad7c20bcbfefa5487199afd9f1833202c20f78c3dc9743e41127 Jan 21 18:16:01 crc kubenswrapper[5099]: E0121 18:16:01.576796 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:02.076776295 +0000 UTC m=+119.490738916 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.577345 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/d0d4f813-c328-441b-963b-5241f73f9da2-mountpoint-dir\") pod \"csi-hostpathplugin-r6vwz\" (UID: \"d0d4f813-c328-441b-963b-5241f73f9da2\") " pod="hostpath-provisioner/csi-hostpathplugin-r6vwz" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.577890 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/e0db719c-cb3c-4c7d-ab76-20a341a011e6-ready\") pod \"cni-sysctl-allowlist-ds-zjwsj\" (UID: \"e0db719c-cb3c-4c7d-ab76-20a341a011e6\") " pod="openshift-multus/cni-sysctl-allowlist-ds-zjwsj" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.582513 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb4ca0e8-efd8-493c-8784-2e28266561eb-kube-api-access\") pod \"kube-apiserver-operator-575994946d-2zxgx\" (UID: \"cb4ca0e8-efd8-493c-8784-2e28266561eb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-2zxgx" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.583709 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/26f1bd0a-19bb-49e5-9f2c-3e93d2703cc7-node-bootstrap-token\") pod \"machine-config-server-dgxpb\" (UID: \"26f1bd0a-19bb-49e5-9f2c-3e93d2703cc7\") " pod="openshift-machine-config-operator/machine-config-server-dgxpb" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.585527 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a8454077-b0c4-4cdc-80e0-a620312eec57-cert\") pod \"ingress-canary-spkz4\" (UID: \"a8454077-b0c4-4cdc-80e0-a620312eec57\") " pod="openshift-ingress-canary/ingress-canary-spkz4" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.603508 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpcm4\" (UniqueName: \"kubernetes.io/projected/d6a1f7e0-130a-4bf8-8602-8b1800b7de37-kube-api-access-vpcm4\") pod \"ingress-operator-6b9cb4dbcf-6zt7l\" (UID: \"d6a1f7e0-130a-4bf8-8602-8b1800b7de37\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6zt7l" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.616000 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/26f1bd0a-19bb-49e5-9f2c-3e93d2703cc7-certs\") pod \"machine-config-server-dgxpb\" (UID: \"26f1bd0a-19bb-49e5-9f2c-3e93d2703cc7\") " pod="openshift-machine-config-operator/machine-config-server-dgxpb" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.645555 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghxxv\" (UniqueName: \"kubernetes.io/projected/a8454077-b0c4-4cdc-80e0-a620312eec57-kube-api-access-ghxxv\") pod \"ingress-canary-spkz4\" (UID: 
\"a8454077-b0c4-4cdc-80e0-a620312eec57\") " pod="openshift-ingress-canary/ingress-canary-spkz4" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.661493 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-44pqb\" (UniqueName: \"kubernetes.io/projected/26f1bd0a-19bb-49e5-9f2c-3e93d2703cc7-kube-api-access-44pqb\") pod \"machine-config-server-dgxpb\" (UID: \"26f1bd0a-19bb-49e5-9f2c-3e93d2703cc7\") " pod="openshift-machine-config-operator/machine-config-server-dgxpb" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.678685 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:01 crc kubenswrapper[5099]: E0121 18:16:01.679259 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:02.179232407 +0000 UTC m=+119.593194868 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.684826 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-glmnx\" (UniqueName: \"kubernetes.io/projected/e0db719c-cb3c-4c7d-ab76-20a341a011e6-kube-api-access-glmnx\") pod \"cni-sysctl-allowlist-ds-zjwsj\" (UID: \"e0db719c-cb3c-4c7d-ab76-20a341a011e6\") " pod="openshift-multus/cni-sysctl-allowlist-ds-zjwsj" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.773709 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-6r5xz" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.776237 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mnnk\" (UniqueName: \"kubernetes.io/projected/d0d4f813-c328-441b-963b-5241f73f9da2-kube-api-access-4mnnk\") pod \"csi-hostpathplugin-r6vwz\" (UID: \"d0d4f813-c328-441b-963b-5241f73f9da2\") " pod="hostpath-provisioner/csi-hostpathplugin-r6vwz" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.783692 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:01 crc kubenswrapper[5099]: E0121 18:16:01.784225 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-21 18:16:02.284206974 +0000 UTC m=+119.698169435 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.786187 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-2zxgx" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.798493 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-48lzl" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.813068 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-z2ttk" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.820448 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6zt7l" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.936359 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-dgxpb" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.964312 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-r6vwz" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.964505 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-zjwsj" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.964582 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-spkz4" Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.965274 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:01 crc kubenswrapper[5099]: E0121 18:16:01.965579 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:02.465513978 +0000 UTC m=+119.879476439 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:01 crc kubenswrapper[5099]: I0121 18:16:01.966184 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:01 crc kubenswrapper[5099]: E0121 18:16:01.966863 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:02.466837043 +0000 UTC m=+119.880799504 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:02 crc kubenswrapper[5099]: I0121 18:16:02.068216 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:02 crc kubenswrapper[5099]: E0121 18:16:02.068925 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:02.568894595 +0000 UTC m=+119.982857056 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:02 crc kubenswrapper[5099]: I0121 18:16:02.177309 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:02 crc kubenswrapper[5099]: E0121 18:16:02.177904 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:02.677884506 +0000 UTC m=+120.091846967 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:02 crc kubenswrapper[5099]: I0121 18:16:02.202114 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" event={"ID":"ad44bdbe-5009-4b21-ad83-21185ec2d86d","Type":"ContainerStarted","Data":"aa4dbceb1bb2ad7c20bcbfefa5487199afd9f1833202c20f78c3dc9743e41127"} Jan 21 18:16:02 crc kubenswrapper[5099]: I0121 18:16:02.210657 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-d55cs" event={"ID":"282f137b-885c-4e38-ac24-c35a21457457","Type":"ContainerStarted","Data":"030ad22dd71708e9e3905a5e7f7d3c6d9dae99b18c047d13c016a5eb9af9f526"} Jan 21 18:16:02 crc kubenswrapper[5099]: I0121 18:16:02.227119 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-nb9km"] Jan 21 18:16:02 crc kubenswrapper[5099]: I0121 18:16:02.278791 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:02 crc kubenswrapper[5099]: E0121 18:16:02.279288 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:02.7792582 +0000 UTC m=+120.193220781 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:02 crc kubenswrapper[5099]: I0121 18:16:02.381139 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:02 crc kubenswrapper[5099]: E0121 18:16:02.382331 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:02.882310098 +0000 UTC m=+120.296272559 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:02 crc kubenswrapper[5099]: I0121 18:16:02.481260 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-5p85q"] Jan 21 18:16:02 crc kubenswrapper[5099]: I0121 18:16:02.483280 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-lhgtf"] Jan 21 18:16:02 crc kubenswrapper[5099]: I0121 18:16:02.490564 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-nx9kc"] Jan 21 18:16:02 crc kubenswrapper[5099]: I0121 18:16:02.490989 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4fr9d"] Jan 21 18:16:02 crc kubenswrapper[5099]: I0121 18:16:02.491799 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:02 crc kubenswrapper[5099]: E0121 18:16:02.492433 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:02.992408618 +0000 UTC m=+120.406371079 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:02 crc kubenswrapper[5099]: I0121 18:16:02.493691 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-btpkr"] Jan 21 18:16:02 crc kubenswrapper[5099]: I0121 18:16:02.522478 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-vs6xv"] Jan 21 18:16:02 crc kubenswrapper[5099]: I0121 18:16:02.593654 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:02 crc kubenswrapper[5099]: E0121 18:16:02.594571 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:03.094549332 +0000 UTC m=+120.508511793 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:02 crc kubenswrapper[5099]: I0121 18:16:02.747579 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:02 crc kubenswrapper[5099]: E0121 18:16:02.748231 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:03.24819997 +0000 UTC m=+120.662162441 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:02 crc kubenswrapper[5099]: I0121 18:16:02.755950 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-477z9"] Jan 21 18:16:02 crc kubenswrapper[5099]: I0121 18:16:02.854091 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:02 crc kubenswrapper[5099]: E0121 18:16:02.854959 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:03.354935843 +0000 UTC m=+120.768898314 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:02 crc kubenswrapper[5099]: I0121 18:16:02.962180 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:02 crc kubenswrapper[5099]: E0121 18:16:02.963497 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:03.463391 +0000 UTC m=+120.877353481 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:02 crc kubenswrapper[5099]: I0121 18:16:02.966086 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:02 crc kubenswrapper[5099]: E0121 18:16:02.967379 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:03.467361613 +0000 UTC m=+120.881324074 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:03 crc kubenswrapper[5099]: I0121 18:16:03.067472 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:03 crc kubenswrapper[5099]: E0121 18:16:03.067615 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:03.567546097 +0000 UTC m=+120.981508568 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:03 crc kubenswrapper[5099]: I0121 18:16:03.068336 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:03 crc kubenswrapper[5099]: E0121 18:16:03.069862 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:03.569842156 +0000 UTC m=+120.983804677 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:03 crc kubenswrapper[5099]: I0121 18:16:03.081534 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-vr7cg"] Jan 21 18:16:03 crc kubenswrapper[5099]: I0121 18:16:03.170338 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:03 crc kubenswrapper[5099]: E0121 18:16:03.170793 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:03.67077604 +0000 UTC m=+121.084738501 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:03 crc kubenswrapper[5099]: I0121 18:16:03.171080 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=42.171046237 podStartE2EDuration="42.171046237s" podCreationTimestamp="2026-01-21 18:15:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:16:03.165067461 +0000 UTC m=+120.579029922" watchObservedRunningTime="2026-01-21 18:16:03.171046237 +0000 UTC m=+120.585008698" Jan 21 18:16:03 crc kubenswrapper[5099]: W0121 18:16:03.210003 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e5a1f9f_a6df_4d87_bc2d_509d2632fb32.slice/crio-58dc3816929bb569a56a1a32cbdc86ba9ad182a0cb9e56a40597cee0522b2300 WatchSource:0}: Error finding container 58dc3816929bb569a56a1a32cbdc86ba9ad182a0cb9e56a40597cee0522b2300: Status 404 returned error can't find the container with id 58dc3816929bb569a56a1a32cbdc86ba9ad182a0cb9e56a40597cee0522b2300 Jan 21 18:16:03 crc kubenswrapper[5099]: I0121 18:16:03.272502 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:03 crc kubenswrapper[5099]: E0121 18:16:03.273058 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:03.773040777 +0000 UTC m=+121.187003238 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:03 crc kubenswrapper[5099]: I0121 18:16:03.342164 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-lhgtf" event={"ID":"ee64e319-d2fd-4a23-808e-a4ab684a16af","Type":"ContainerStarted","Data":"4562a5d2fe43e6c145b6d94e9938a0b0ad34cc27ba472d905140045b183cd61c"} Jan 21 18:16:03 crc kubenswrapper[5099]: I0121 18:16:03.381838 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-nx9kc" event={"ID":"9ab60787-a0f6-4772-96ae-8278cdada627","Type":"ContainerStarted","Data":"8d33c0ac1f56cf5a3496d0fc29737e89b0df14c23fd72d26d3e76e454595f5d3"} Jan 21 18:16:03 crc kubenswrapper[5099]: I0121 18:16:03.383024 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:03 crc kubenswrapper[5099]: E0121 18:16:03.383356 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:03.883331262 +0000 UTC m=+121.297293723 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:03 crc kubenswrapper[5099]: I0121 18:16:03.484935 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:03 crc kubenswrapper[5099]: E0121 18:16:03.485754 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:03.985708142 +0000 UTC m=+121.399670593 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:03 crc kubenswrapper[5099]: I0121 18:16:03.561240 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-kbdp8" event={"ID":"3df86cb4-acbc-40de-9991-9ba4cc6d0397","Type":"ContainerStarted","Data":"40ac28993e1009b7c91baf18376543244899c5cc6a5857662578f475ba0946c9"} Jan 21 18:16:03 crc kubenswrapper[5099]: I0121 18:16:03.569125 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-nb9km" event={"ID":"b99b91a3-bde7-4051-b805-2b015cbd3ab6","Type":"ContainerStarted","Data":"ac1abbbc009f4ac74b9828f1a28500236001fd34a5aaa28b979b18b1d6e96714"} Jan 21 18:16:03 crc kubenswrapper[5099]: I0121 18:16:03.588985 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:03 crc kubenswrapper[5099]: E0121 18:16:03.592029 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:04.091997123 +0000 UTC m=+121.505959584 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:03 crc kubenswrapper[5099]: I0121 18:16:03.621250 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-5p85q" event={"ID":"9f61a6cf-7081-41ed-9e89-05212a634fb0","Type":"ContainerStarted","Data":"b2847d06fec6892172f8002db70a8b13c1d50df59c8382262e8c67bd6faceb79"} Jan 21 18:16:03 crc kubenswrapper[5099]: I0121 18:16:03.637257 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-kbdp8" podStartSLOduration=92.637215945 podStartE2EDuration="1m32.637215945s" podCreationTimestamp="2026-01-21 18:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:16:03.598759889 +0000 UTC m=+121.012722350" watchObservedRunningTime="2026-01-21 18:16:03.637215945 +0000 UTC m=+121.051178416" Jan 21 18:16:03 crc kubenswrapper[5099]: I0121 18:16:03.653246 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-d55cs" event={"ID":"282f137b-885c-4e38-ac24-c35a21457457","Type":"ContainerStarted","Data":"e618958dfa212394dac0cea11c2975ca856ad39c583dca3c08416dd89e5a98aa"} Jan 21 18:16:03 crc kubenswrapper[5099]: I0121 18:16:03.689256 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-dgxpb" event={"ID":"26f1bd0a-19bb-49e5-9f2c-3e93d2703cc7","Type":"ContainerStarted","Data":"0cd9480697ad17eecdba42521fa5f014a8698de149b2adada315b5fbdee47941"} Jan 21 18:16:03 crc kubenswrapper[5099]: I0121 18:16:03.689890 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-dgxpb" event={"ID":"26f1bd0a-19bb-49e5-9f2c-3e93d2703cc7","Type":"ContainerStarted","Data":"34777cbfa7184a56c4134378e12a01508d2573635b158924e95c824c63868609"} Jan 21 18:16:03 crc kubenswrapper[5099]: I0121 18:16:03.691174 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-d55cs" podStartSLOduration=92.69113368 podStartE2EDuration="1m32.69113368s" podCreationTimestamp="2026-01-21 18:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:16:03.690891864 +0000 UTC m=+121.104854325" watchObservedRunningTime="2026-01-21 18:16:03.69113368 +0000 UTC m=+121.105096151" Jan 21 18:16:03 crc kubenswrapper[5099]: I0121 18:16:03.703727 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:03 crc kubenswrapper[5099]: E0121 18:16:03.711129 5099 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:04.211094626 +0000 UTC m=+121.625057087 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:03 crc kubenswrapper[5099]: I0121 18:16:03.718550 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-477z9" event={"ID":"0e5a1f9f-a6df-4d87-bc2d-509d2632fb32","Type":"ContainerStarted","Data":"58dc3816929bb569a56a1a32cbdc86ba9ad182a0cb9e56a40597cee0522b2300"} Jan 21 18:16:03 crc kubenswrapper[5099]: I0121 18:16:03.732512 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-c6bc8" event={"ID":"287553f1-f80f-47bb-8a01-1930cd0e5d2c","Type":"ContainerStarted","Data":"5059f5471b188309679edb3839e9364868fafc01fc48bf8d4703678d454551bb"} Jan 21 18:16:03 crc kubenswrapper[5099]: I0121 18:16:03.732587 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-c6bc8" event={"ID":"287553f1-f80f-47bb-8a01-1930cd0e5d2c","Type":"ContainerStarted","Data":"f09cd4f643e289e938dcf7fd7823af55c85fb3c0152976b6168f8617fb420273"} Jan 21 18:16:03 crc kubenswrapper[5099]: I0121 18:16:03.734374 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-fwthw"] Jan 21 18:16:03 crc kubenswrapper[5099]: I0121 18:16:03.738181 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4fr9d" event={"ID":"11b4e369-201a-410c-a66c-9612fc9fafa8","Type":"ContainerStarted","Data":"7cfdf2de6fcc9086924dbf3d53eb31c10e11fb93198499e8bb4677422b19c0f5"} Jan 21 18:16:03 crc kubenswrapper[5099]: I0121 18:16:03.735560 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-dgxpb" podStartSLOduration=6.73554676 podStartE2EDuration="6.73554676s" podCreationTimestamp="2026-01-21 18:15:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:16:03.728033916 +0000 UTC m=+121.141996387" watchObservedRunningTime="2026-01-21 18:16:03.73554676 +0000 UTC m=+121.149509221" Jan 21 18:16:03 crc kubenswrapper[5099]: I0121 18:16:03.739966 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-vs6xv" event={"ID":"cc955325-c8fa-4454-ab18-2d7ea44f7da4","Type":"ContainerStarted","Data":"cd04b8ab54f49a167059bf8457b125bac85642df307ae128393352788217c13e"} Jan 21 18:16:03 crc kubenswrapper[5099]: I0121 18:16:03.741755 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" 
event={"ID":"ad44bdbe-5009-4b21-ad83-21185ec2d86d","Type":"ContainerStarted","Data":"ba697192c4cbf5092a5f3e7374cda7eb0d0a684b5d8dd85c11779fd5d3518b15"} Jan 21 18:16:03 crc kubenswrapper[5099]: I0121 18:16:03.753400 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-zjwsj" event={"ID":"e0db719c-cb3c-4c7d-ab76-20a341a011e6","Type":"ContainerStarted","Data":"2bcaca77323bcd511b8997b22a3a2d0da33edaf8b87c5eed369695473a8a4798"} Jan 21 18:16:03 crc kubenswrapper[5099]: I0121 18:16:03.753780 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-zjwsj" Jan 21 18:16:03 crc kubenswrapper[5099]: I0121 18:16:03.816611 5099 generic.go:358] "Generic (PLEG): container finished" podID="0494dafa-d272-45bf-a11e-7ca78f92223d" containerID="b10ea11d9c02fee5ef3cc31f9536b31d3a45ef9a90b935ab6f7bd5fa92d3a205" exitCode=0 Jan 21 18:16:03 crc kubenswrapper[5099]: I0121 18:16:03.819511 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-84k5t" event={"ID":"0494dafa-d272-45bf-a11e-7ca78f92223d","Type":"ContainerDied","Data":"b10ea11d9c02fee5ef3cc31f9536b31d3a45ef9a90b935ab6f7bd5fa92d3a205"} Jan 21 18:16:03 crc kubenswrapper[5099]: I0121 18:16:03.824956 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:03 crc kubenswrapper[5099]: E0121 18:16:03.828434 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:04.328366053 +0000 UTC m=+121.742328514 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:03 crc kubenswrapper[5099]: I0121 18:16:03.835989 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjbt9"] Jan 21 18:16:03 crc kubenswrapper[5099]: I0121 18:16:03.837259 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" podStartSLOduration=92.837221682 podStartE2EDuration="1m32.837221682s" podCreationTimestamp="2026-01-21 18:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:16:03.817289076 +0000 UTC m=+121.231251547" watchObservedRunningTime="2026-01-21 18:16:03.837221682 +0000 UTC m=+121.251184143" Jan 21 18:16:03 crc kubenswrapper[5099]: I0121 18:16:03.838513 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-btpkr" event={"ID":"0ffb7e64-0677-44ec-971d-fda3f9b87e2d","Type":"ContainerStarted","Data":"dc5b5e4c482bbc42838583b156e3a96e0cfba77c4d5cb93cf0e41e9f3ac3a4f8"} Jan 21 18:16:03 crc kubenswrapper[5099]: I0121 18:16:03.843647 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-5pwm7" event={"ID":"85ddc24f-5591-4300-9269-cbc659dc7b4f","Type":"ContainerStarted","Data":"3aee521a344ef0d410860d95f89e5e08d1609ba13c6f9cb6a92e0275b7e865b6"} Jan 21 18:16:03 crc kubenswrapper[5099]: I0121 18:16:03.867895 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-5pwm7" Jan 21 18:16:03 crc kubenswrapper[5099]: I0121 18:16:03.885120 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-zjwsj" podStartSLOduration=5.885082261 podStartE2EDuration="5.885082261s" podCreationTimestamp="2026-01-21 18:15:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:16:03.884120666 +0000 UTC m=+121.298083127" watchObservedRunningTime="2026-01-21 18:16:03.885082261 +0000 UTC m=+121.299044732" Jan 21 18:16:03 crc kubenswrapper[5099]: I0121 18:16:03.967219 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:03 crc kubenswrapper[5099]: E0121 18:16:03.976749 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:04.476715933 +0000 UTC m=+121.890678394 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:04 crc kubenswrapper[5099]: I0121 18:16:03.999615 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-5pwm7" podStartSLOduration=92.999580105 podStartE2EDuration="1m32.999580105s" podCreationTimestamp="2026-01-21 18:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:16:03.979243169 +0000 UTC m=+121.393205650" watchObservedRunningTime="2026-01-21 18:16:03.999580105 +0000 UTC m=+121.413542566" Jan 21 18:16:04 crc kubenswrapper[5099]: I0121 18:16:04.070229 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:04 crc kubenswrapper[5099]: E0121 18:16:04.071013 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:04.570962413 +0000 UTC m=+121.984924874 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:04 crc kubenswrapper[5099]: I0121 18:16:04.167284 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-p9ggs"] Jan 21 18:16:04 crc kubenswrapper[5099]: I0121 18:16:04.167325 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-4j82v"] Jan 21 18:16:04 crc kubenswrapper[5099]: I0121 18:16:04.167339 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-xhj5t"] Jan 21 18:16:04 crc kubenswrapper[5099]: I0121 18:16:04.172883 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:04 crc kubenswrapper[5099]: E0121 18:16:04.173963 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:04.673939719 +0000 UTC m=+122.087902190 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:04 crc kubenswrapper[5099]: W0121 18:16:04.190149 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f5bf46f_e39c_4fa5_9ec3_24912f616295.slice/crio-59644e3a93607f4db4f696a8ab3d0303dd958c92fc6f21d332e8598bd03860e8 WatchSource:0}: Error finding container 59644e3a93607f4db4f696a8ab3d0303dd958c92fc6f21d332e8598bd03860e8: Status 404 returned error can't find the container with id 59644e3a93607f4db4f696a8ab3d0303dd958c92fc6f21d332e8598bd03860e8 Jan 21 18:16:04 crc kubenswrapper[5099]: W0121 18:16:04.200287 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod039025e0_e2cb_479d_b87a_9966fa3d96f2.slice/crio-f0afc5d9a06b8e098cb89daa8e62c058d4a26b1467876cbd52f284d14acd68e9 WatchSource:0}: Error finding container f0afc5d9a06b8e098cb89daa8e62c058d4a26b1467876cbd52f284d14acd68e9: Status 404 returned error can't find the container with id f0afc5d9a06b8e098cb89daa8e62c058d4a26b1467876cbd52f284d14acd68e9 Jan 21 18:16:04 crc kubenswrapper[5099]: I0121 18:16:04.274617 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:04 crc kubenswrapper[5099]: E0121 18:16:04.275379 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:04.775352284 +0000 UTC m=+122.189314755 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:04 crc kubenswrapper[5099]: I0121 18:16:04.393553 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:04 crc kubenswrapper[5099]: E0121 18:16:04.394889 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-21 18:16:04.894865708 +0000 UTC m=+122.308828169 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:04 crc kubenswrapper[5099]: I0121 18:16:04.406419 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-6r5xz"] Jan 21 18:16:04 crc kubenswrapper[5099]: E0121 18:16:04.406433 5099 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee64e319_d2fd_4a23_808e_a4ab684a16af.slice/crio-conmon-febf57ef5d3b5b3933f524609bb87fa63900036e577a74c06ffe7fccde4ea6f9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee64e319_d2fd_4a23_808e_a4ab684a16af.slice/crio-febf57ef5d3b5b3933f524609bb87fa63900036e577a74c06ffe7fccde4ea6f9.scope\": RecentStats: unable to find data in memory cache]" Jan 21 18:16:04 crc kubenswrapper[5099]: I0121 18:16:04.413543 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-q9l8j"] Jan 21 18:16:04 crc kubenswrapper[5099]: I0121 18:16:04.427989 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-2zxgx"] Jan 21 18:16:04 crc kubenswrapper[5099]: I0121 18:16:04.432381 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-zlpql"] Jan 21 18:16:04 crc kubenswrapper[5099]: I0121 18:16:04.440473 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-6qnjf"] Jan 21 18:16:04 crc kubenswrapper[5099]: I0121 18:16:04.459865 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-xr84c"] Jan 21 18:16:04 crc kubenswrapper[5099]: I0121 18:16:04.462335 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-ddw2j"] Jan 21 18:16:04 crc kubenswrapper[5099]: I0121 18:16:04.470705 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-48lzl"] Jan 21 18:16:04 crc kubenswrapper[5099]: I0121 18:16:04.482598 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-lxg2b"] Jan 21 18:16:04 crc kubenswrapper[5099]: I0121 18:16:04.489592 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-62qpd"] Jan 21 18:16:04 crc kubenswrapper[5099]: I0121 18:16:04.492115 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-hkwx9"] Jan 21 18:16:04 crc kubenswrapper[5099]: I0121 18:16:04.493466 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483655-48djt"] Jan 21 18:16:04 crc kubenswrapper[5099]: I0121 18:16:04.494750 5099 
kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" Jan 21 18:16:04 crc kubenswrapper[5099]: I0121 18:16:04.495705 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:04 crc kubenswrapper[5099]: E0121 18:16:04.496022 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:04.995995866 +0000 UTC m=+122.409958327 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:04 crc kubenswrapper[5099]: I0121 18:16:04.497286 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-vgfbc"] Jan 21 18:16:04 crc kubenswrapper[5099]: I0121 18:16:04.509723 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-lqqhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 18:16:04 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Jan 21 18:16:04 crc kubenswrapper[5099]: [+]process-running ok Jan 21 18:16:04 crc kubenswrapper[5099]: healthz check failed Jan 21 18:16:04 crc kubenswrapper[5099]: I0121 18:16:04.509829 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" podUID="ad44bdbe-5009-4b21-ad83-21185ec2d86d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 18:16:04 crc kubenswrapper[5099]: I0121 18:16:04.517418 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-6ww7n"] Jan 21 18:16:04 crc kubenswrapper[5099]: I0121 18:16:04.517491 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-z2ttk"] Jan 21 18:16:04 crc kubenswrapper[5099]: I0121 18:16:04.519171 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-xfrc5"] Jan 21 18:16:04 crc kubenswrapper[5099]: I0121 18:16:04.521721 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-lkws9"] Jan 21 18:16:04 crc kubenswrapper[5099]: I0121 18:16:04.523083 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-r6vwz"] Jan 21 18:16:04 crc kubenswrapper[5099]: I0121 18:16:04.525776 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-spkz4"] Jan 21 18:16:04 crc kubenswrapper[5099]: I0121 18:16:04.526642 5099 kubelet.go:2544] "SyncLoop 
UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6zt7l"] Jan 21 18:16:04 crc kubenswrapper[5099]: W0121 18:16:04.527926 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda3a0959_1a85_473a_95d5_51b77e30c5da.slice/crio-f77588173a6247ba4cd02724375a31e6c39b4350bb2a4c50f67a4242e8d252b1 WatchSource:0}: Error finding container f77588173a6247ba4cd02724375a31e6c39b4350bb2a4c50f67a4242e8d252b1: Status 404 returned error can't find the container with id f77588173a6247ba4cd02724375a31e6c39b4350bb2a4c50f67a4242e8d252b1 Jan 21 18:16:04 crc kubenswrapper[5099]: W0121 18:16:04.529363 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67a0e83c_f043_4329_95ac_4cc0a6ac538f.slice/crio-433a8e825b48540804ad1e284162b2df710a627c20a99fb203f19e2e83ccb5a3 WatchSource:0}: Error finding container 433a8e825b48540804ad1e284162b2df710a627c20a99fb203f19e2e83ccb5a3: Status 404 returned error can't find the container with id 433a8e825b48540804ad1e284162b2df710a627c20a99fb203f19e2e83ccb5a3 Jan 21 18:16:04 crc kubenswrapper[5099]: W0121 18:16:04.531636 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac0d4bff_1835_45f9_bca5_e84de2f1c705.slice/crio-cbaf70035f857ec399c3145c754e5f4b24aeab9fcf5e0ecc1c9376a06227a7dd WatchSource:0}: Error finding container cbaf70035f857ec399c3145c754e5f4b24aeab9fcf5e0ecc1c9376a06227a7dd: Status 404 returned error can't find the container with id cbaf70035f857ec399c3145c754e5f4b24aeab9fcf5e0ecc1c9376a06227a7dd Jan 21 18:16:04 crc kubenswrapper[5099]: I0121 18:16:04.556270 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-5pwm7" Jan 21 18:16:04 crc kubenswrapper[5099]: I0121 18:16:04.598378 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:04 crc kubenswrapper[5099]: E0121 18:16:04.602131 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:05.102106133 +0000 UTC m=+122.516068594 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:04 crc kubenswrapper[5099]: I0121 18:16:04.701012 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:04 crc kubenswrapper[5099]: E0121 18:16:04.701514 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:05.201488506 +0000 UTC m=+122.615450967 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:04 crc kubenswrapper[5099]: I0121 18:16:04.803421 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:04 crc kubenswrapper[5099]: E0121 18:16:04.804099 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:05.304073081 +0000 UTC m=+122.718035542 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:04 crc kubenswrapper[5099]: I0121 18:16:04.863977 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-zjwsj" event={"ID":"e0db719c-cb3c-4c7d-ab76-20a341a011e6","Type":"ContainerStarted","Data":"f7db3ee6f61c5c01e7605630be93c8b733c80c9c6c756304b44f3630f68f632a"} Jan 21 18:16:04 crc kubenswrapper[5099]: I0121 18:16:04.870009 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-xhj5t" event={"ID":"178950b5-b1b9-4d7d-90b1-ba4fb79fd10d","Type":"ContainerStarted","Data":"42a36770f2dd53adcf425b387da5acf4fefdab1d4dd6b5b747ef9b321b34980b"} Jan 21 18:16:04 crc kubenswrapper[5099]: I0121 18:16:04.900836 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-lkws9" event={"ID":"2805b946-8ae5-4a2f-8ae0-e5fb058174e7","Type":"ContainerStarted","Data":"2ab3032391b5ea20c6218c2d8f9a4de8cac321c7b42e9db21b27531ddf2117c1"} Jan 21 18:16:04 crc kubenswrapper[5099]: I0121 18:16:04.909027 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:04 crc kubenswrapper[5099]: E0121 18:16:04.910649 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:05.41062556 +0000 UTC m=+122.824588021 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:04 crc kubenswrapper[5099]: I0121 18:16:04.916606 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-zjwsj" Jan 21 18:16:04 crc kubenswrapper[5099]: I0121 18:16:04.920089 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-vgfbc" event={"ID":"ac0d4bff-1835-45f9-bca5-e84de2f1c705","Type":"ContainerStarted","Data":"cbaf70035f857ec399c3145c754e5f4b24aeab9fcf5e0ecc1c9376a06227a7dd"} Jan 21 18:16:04 crc kubenswrapper[5099]: I0121 18:16:04.938108 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-r6vwz" event={"ID":"d0d4f813-c328-441b-963b-5241f73f9da2","Type":"ContainerStarted","Data":"59c74d8e5b5cab87be6721879a4c7fe994c7d0a10ffd2454f481d8d262bb4e92"} Jan 21 18:16:04 crc kubenswrapper[5099]: I0121 18:16:04.949317 5099 generic.go:358] "Generic (PLEG): container finished" podID="ee64e319-d2fd-4a23-808e-a4ab684a16af" containerID="febf57ef5d3b5b3933f524609bb87fa63900036e577a74c06ffe7fccde4ea6f9" exitCode=0 Jan 21 18:16:04 crc kubenswrapper[5099]: I0121 18:16:04.950420 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-lhgtf" event={"ID":"ee64e319-d2fd-4a23-808e-a4ab684a16af","Type":"ContainerDied","Data":"febf57ef5d3b5b3933f524609bb87fa63900036e577a74c06ffe7fccde4ea6f9"} Jan 21 18:16:04 crc kubenswrapper[5099]: I0121 18:16:04.978229 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-nx9kc" event={"ID":"9ab60787-a0f6-4772-96ae-8278cdada627","Type":"ContainerStarted","Data":"a515ec3e4cf7c6511f56027d9424dc8baa4f0de873aa91edac2cf16f68622b47"} Jan 21 18:16:04 crc kubenswrapper[5099]: I0121 18:16:04.979290 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-nx9kc" Jan 21 18:16:04 crc kubenswrapper[5099]: I0121 18:16:04.988791 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-48lzl" event={"ID":"8855b7e4-a1e8-41ae-b995-832120b0bdcd","Type":"ContainerStarted","Data":"6db87d1d3beb175cd795cc23ced5f84ddd20631149fde774e742b84844d880a5"} Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.014748 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-nx9kc" podStartSLOduration=94.014488558 podStartE2EDuration="1m34.014488558s" podCreationTimestamp="2026-01-21 18:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:16:05.01221151 +0000 UTC m=+122.426173971" watchObservedRunningTime="2026-01-21 18:16:05.014488558 +0000 UTC m=+122.428451019" Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.015567 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:05 crc kubenswrapper[5099]: E0121 18:16:05.016248 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:05.516230554 +0000 UTC m=+122.930193015 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.016341 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-nx9kc" Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.050162 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-nb9km" event={"ID":"b99b91a3-bde7-4051-b805-2b015cbd3ab6","Type":"ContainerStarted","Data":"5b75da4e4fc830523c5cd9461dce496c1f6a115ee3e9d828e11dc9dd948cacd4"} Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.156833 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:05 crc kubenswrapper[5099]: E0121 18:16:05.157250 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:05.657222903 +0000 UTC m=+123.071185364 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.184004 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-nb9km" podStartSLOduration=94.183979286 podStartE2EDuration="1m34.183979286s" podCreationTimestamp="2026-01-21 18:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:16:05.181307086 +0000 UTC m=+122.595269547" watchObservedRunningTime="2026-01-21 18:16:05.183979286 +0000 UTC m=+122.597941747" Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.187777 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6zt7l" event={"ID":"d6a1f7e0-130a-4bf8-8602-8b1800b7de37","Type":"ContainerStarted","Data":"20eaa909a1e6ffa553e7ce529022998150398df8e7238bf18356029d4d9ea13e"} Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.196428 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-p9ggs" event={"ID":"8f5bf46f-e39c-4fa5-9ec3-24912f616295","Type":"ContainerStarted","Data":"59644e3a93607f4db4f696a8ab3d0303dd958c92fc6f21d332e8598bd03860e8"} Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.225226 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-477z9" event={"ID":"0e5a1f9f-a6df-4d87-bc2d-509d2632fb32","Type":"ContainerStarted","Data":"c5dda360082b9263be5d3b834ac29af62683ee2862a6d5aedf12effc4768064b"} Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.229881 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-c6bc8" event={"ID":"287553f1-f80f-47bb-8a01-1930cd0e5d2c","Type":"ContainerStarted","Data":"d9f4e638c8d34646396e6fb4dd659133a1ed7515051215dad9444e55ba6f3941"} Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.239208 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-spkz4" event={"ID":"a8454077-b0c4-4cdc-80e0-a620312eec57","Type":"ContainerStarted","Data":"028dc9f91e51579f396c0ae6be1a1dc77f9cc08dbb0d7e31a91348e78f767bc0"} Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.251441 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-zlpql" event={"ID":"bee88171-b2f0-49bb-92aa-8a0d79d87cb7","Type":"ContainerStarted","Data":"e11ecd64a65c83503585ea296ca523d2aae90c7ac59ac1e57bbdbc2fe07fc589"} Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.257119 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-c6bc8" podStartSLOduration=94.257099319 podStartE2EDuration="1m34.257099319s" podCreationTimestamp="2026-01-21 18:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-21 18:16:05.255704003 +0000 UTC m=+122.669666464" watchObservedRunningTime="2026-01-21 18:16:05.257099319 +0000 UTC m=+122.671061770" Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.258373 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:05 crc kubenswrapper[5099]: E0121 18:16:05.259280 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:05.759257565 +0000 UTC m=+123.173220026 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.276583 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-q9l8j" event={"ID":"2dde6863-5960-4b1b-b694-be1862901fb0","Type":"ContainerStarted","Data":"e2b28e39e9faef0d5aa63b3848e697521ab22bb4fa8e2eea1b753f0c4fdfd423"} Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.279225 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-2zxgx" event={"ID":"cb4ca0e8-efd8-493c-8784-2e28266561eb","Type":"ContainerStarted","Data":"97c85e3564f8e88414532030c9c2388ae3ba3e5737880abb9e89e640534232b8"} Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.284511 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-z2ttk" event={"ID":"f08a4565-4bb3-44b9-90e0-1b841c3127ea","Type":"ContainerStarted","Data":"70a8bb236f0780cdc5b80cbe7d8f72b680bf098f397e54e8f3be21d6e23bed9d"} Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.286354 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-6r5xz" event={"ID":"cf468a9b-3840-46c0-8390-79ec278be1d0","Type":"ContainerStarted","Data":"d60d314a50213af14d7d1199935a90088d6e7125170eb028569d1cfa908722b0"} Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.288772 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjbt9" event={"ID":"2925eaf0-c587-4c7f-a246-5b64c7103637","Type":"ContainerStarted","Data":"b6d9edb1ed7aa4ca9d5a88f9af9107e9601315018c738be83faa50a8f80a3396"} Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.290104 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-xr84c" event={"ID":"2b370d45-15f6-4f78-90d8-f15bb7f31949","Type":"ContainerStarted","Data":"d3c9b9941c506b0ac90febc73d18d24c285e3be97e58bf0384d59ed2d6eee5bf"} Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 
18:16:05.291001 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-ddw2j" event={"ID":"63bfe3eb-44bd-45db-8327-52468bb9ca12","Type":"ContainerStarted","Data":"5f0131ad9c394f2c1a21cf1118bdf610bb0d5c998936d029d4584ed3a4eed71b"} Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.292147 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" event={"ID":"39b31197-feb5-4a81-8dca-de4b873dc013","Type":"ContainerStarted","Data":"162061321f5a4c16b240cfbee6a8e08376d6c8b648c3ca85315663b4fa746474"} Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.293071 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-62qpd" event={"ID":"cc660b0c-3432-4bfa-8349-0f7ac08afce8","Type":"ContainerStarted","Data":"9a171b9aec4c6dbd633bf0429781591b30ec1f02a38429449e89d2ddfb060543"} Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.294642 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-fwthw" event={"ID":"bd987c53-2ea5-4da5-a0ea-bb16dc45cdb0","Type":"ContainerStarted","Data":"31aa54e3d259029d2e065302cb6a3ab605cc060a78e4f946538f6345a554a8de"} Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.294679 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-fwthw" event={"ID":"bd987c53-2ea5-4da5-a0ea-bb16dc45cdb0","Type":"ContainerStarted","Data":"dcf9ebe3a0f48c6ceec843af25439682443a6ae2670d65682e38d944ab57d4fd"} Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.297432 5099 generic.go:358] "Generic (PLEG): container finished" podID="0ffb7e64-0677-44ec-971d-fda3f9b87e2d" containerID="6d63a59bc8e5e2cfe3266b18627079ec48c8bf6fb9dd696b16751a4c083b15f7" exitCode=0 Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.297541 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-btpkr" event={"ID":"0ffb7e64-0677-44ec-971d-fda3f9b87e2d","Type":"ContainerDied","Data":"6d63a59bc8e5e2cfe3266b18627079ec48c8bf6fb9dd696b16751a4c083b15f7"} Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.298640 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-6ww7n" event={"ID":"09b91830-9a07-4d48-9435-c5f7e9c2a402","Type":"ContainerStarted","Data":"aee3e63644503503aa72210cd02023900542c1e51835b4503ec9990c899ac18a"} Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.306664 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-5p85q" event={"ID":"9f61a6cf-7081-41ed-9e89-05212a634fb0","Type":"ContainerStarted","Data":"b91f85dd2b12063e4eebbc8521ea0027ab3759849983328aba120ab372a1e03e"} Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.307096 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-5p85q" Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.309866 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-lxg2b" event={"ID":"67a0e83c-f043-4329-95ac-4cc0a6ac538f","Type":"ContainerStarted","Data":"433a8e825b48540804ad1e284162b2df710a627c20a99fb203f19e2e83ccb5a3"} Jan 21 18:16:05 crc 
kubenswrapper[5099]: I0121 18:16:05.311123 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-4j82v" event={"ID":"039025e0-e2cb-479d-b87a-9966fa3d96f2","Type":"ContainerStarted","Data":"f0afc5d9a06b8e098cb89daa8e62c058d4a26b1467876cbd52f284d14acd68e9"} Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.311990 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483655-48djt" event={"ID":"05e481c5-0ad1-4c76-bf43-a32b82b763c7","Type":"ContainerStarted","Data":"c296b4466ceb446f4719719f2e61cb4b606acf9ee809a9cfe3bd2c9c479a8854"} Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.313037 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-vr7cg" event={"ID":"419c7428-8eea-4a26-8329-f359a77e5c80","Type":"ContainerStarted","Data":"df4964a3c1a943b3d19bae1ff038aa71dffad974252aa0f7125a231663569bdc"} Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.315845 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4fr9d" event={"ID":"11b4e369-201a-410c-a66c-9612fc9fafa8","Type":"ContainerStarted","Data":"32ad5b378d43ea54b76009204f8371f13f07081e1f29ab330085155e0c040c56"} Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.323925 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-vs6xv" event={"ID":"cc955325-c8fa-4454-ab18-2d7ea44f7da4","Type":"ContainerStarted","Data":"fc93e56caba2a31838fec6441a8e25a9ee1e093a2909fbc74fdd79a2621bcb95"} Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.328634 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-xfrc5" event={"ID":"da3a0959-1a85-473a-95d5-51b77e30c5da","Type":"ContainerStarted","Data":"f77588173a6247ba4cd02724375a31e6c39b4350bb2a4c50f67a4242e8d252b1"} Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.329936 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-hkwx9" event={"ID":"c13f0ecd-bdc7-4f94-9013-3277f1b20451","Type":"ContainerStarted","Data":"b68ea60cc652b9cc5c4f5b7d1bdc6183d1e17e62e4778a04bd78bfcb53828b5d"} Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.344420 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4fr9d" podStartSLOduration=94.344398299 podStartE2EDuration="1m34.344398299s" podCreationTimestamp="2026-01-21 18:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:16:05.343094805 +0000 UTC m=+122.757057276" watchObservedRunningTime="2026-01-21 18:16:05.344398299 +0000 UTC m=+122.758360760" Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.367044 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:05 crc kubenswrapper[5099]: E0121 18:16:05.367133 5099 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:05.867115447 +0000 UTC m=+123.281077898 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.368146 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-5p85q" podStartSLOduration=94.368091842 podStartE2EDuration="1m34.368091842s" podCreationTimestamp="2026-01-21 18:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:16:05.367444485 +0000 UTC m=+122.781406946" watchObservedRunningTime="2026-01-21 18:16:05.368091842 +0000 UTC m=+122.782054303" Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.370382 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:05 crc kubenswrapper[5099]: E0121 18:16:05.373707 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:05.873677517 +0000 UTC m=+123.287640178 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.399970 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-vr7cg" podStartSLOduration=94.399945877 podStartE2EDuration="1m34.399945877s" podCreationTimestamp="2026-01-21 18:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:16:05.397623597 +0000 UTC m=+122.811586058" watchObservedRunningTime="2026-01-21 18:16:05.399945877 +0000 UTC m=+122.813908338" Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.422136 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-zjwsj"] Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.428826 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-vs6xv" podStartSLOduration=94.428807394 podStartE2EDuration="1m34.428807394s" podCreationTimestamp="2026-01-21 18:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:16:05.427318875 +0000 UTC m=+122.841281356" watchObservedRunningTime="2026-01-21 18:16:05.428807394 +0000 UTC m=+122.842769855" Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.471632 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:05 crc kubenswrapper[5099]: E0121 18:16:05.472067 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:05.971977181 +0000 UTC m=+123.385939642 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.472559 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:05 crc kubenswrapper[5099]: E0121 18:16:05.473271 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:05.973256855 +0000 UTC m=+123.387219316 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.499857 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-lqqhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 18:16:05 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Jan 21 18:16:05 crc kubenswrapper[5099]: [+]process-running ok Jan 21 18:16:05 crc kubenswrapper[5099]: healthz check failed Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.499924 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" podUID="ad44bdbe-5009-4b21-ad83-21185ec2d86d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.573810 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:05 crc kubenswrapper[5099]: E0121 18:16:05.574032 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:06.073996143 +0000 UTC m=+123.487958604 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.574243 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:05 crc kubenswrapper[5099]: E0121 18:16:05.574531 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:06.074518486 +0000 UTC m=+123.488480947 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.683975 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:05 crc kubenswrapper[5099]: E0121 18:16:05.684352 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:06.184242627 +0000 UTC m=+123.598205088 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.684624 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:05 crc kubenswrapper[5099]: E0121 18:16:05.685197 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:06.18517652 +0000 UTC m=+123.599138991 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.687821 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-5p85q" Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.788656 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:05 crc kubenswrapper[5099]: E0121 18:16:05.788860 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:06.288837694 +0000 UTC m=+123.702800155 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.789177 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:05 crc kubenswrapper[5099]: E0121 18:16:05.789585 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:06.289574533 +0000 UTC m=+123.703536984 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.914583 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:05 crc kubenswrapper[5099]: E0121 18:16:05.914847 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:06.414818925 +0000 UTC m=+123.828781386 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:05 crc kubenswrapper[5099]: I0121 18:16:05.915317 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:05 crc kubenswrapper[5099]: E0121 18:16:05.915648 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:06.415634117 +0000 UTC m=+123.829596578 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:06 crc kubenswrapper[5099]: I0121 18:16:06.018718 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:06 crc kubenswrapper[5099]: E0121 18:16:06.019210 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:06.519182157 +0000 UTC m=+123.933144618 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:06 crc kubenswrapper[5099]: I0121 18:16:06.122362 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:06 crc kubenswrapper[5099]: E0121 18:16:06.122903 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:06.622882292 +0000 UTC m=+124.036844753 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:06 crc kubenswrapper[5099]: I0121 18:16:06.231478 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:06 crc kubenswrapper[5099]: E0121 18:16:06.231663 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:06.731639738 +0000 UTC m=+124.145602199 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:06 crc kubenswrapper[5099]: I0121 18:16:06.231921 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:06 crc kubenswrapper[5099]: E0121 18:16:06.232377 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:06.732356286 +0000 UTC m=+124.146318747 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:06 crc kubenswrapper[5099]: I0121 18:16:06.334233 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:06 crc kubenswrapper[5099]: E0121 18:16:06.336849 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:06.83682498 +0000 UTC m=+124.250787451 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:06 crc kubenswrapper[5099]: I0121 18:16:06.567460 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:06 crc kubenswrapper[5099]: E0121 18:16:06.577774 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:07.077755957 +0000 UTC m=+124.491718418 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:06 crc kubenswrapper[5099]: I0121 18:16:06.615075 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-lqqhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 18:16:06 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Jan 21 18:16:06 crc kubenswrapper[5099]: [+]process-running ok Jan 21 18:16:06 crc kubenswrapper[5099]: healthz check failed Jan 21 18:16:06 crc kubenswrapper[5099]: I0121 18:16:06.615158 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" podUID="ad44bdbe-5009-4b21-ad83-21185ec2d86d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 18:16:06 crc kubenswrapper[5099]: I0121 18:16:06.674306 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:06 crc kubenswrapper[5099]: E0121 18:16:06.674545 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:07.174509352 +0000 UTC m=+124.588471813 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:06 crc kubenswrapper[5099]: I0121 18:16:06.674923 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:06 crc kubenswrapper[5099]: E0121 18:16:06.675340 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:07.175319183 +0000 UTC m=+124.589281644 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:06 crc kubenswrapper[5099]: I0121 18:16:06.743951 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xdblj"] Jan 21 18:16:06 crc kubenswrapper[5099]: I0121 18:16:06.778263 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:06 crc kubenswrapper[5099]: E0121 18:16:06.778835 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:07.278810122 +0000 UTC m=+124.692772573 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:06 crc kubenswrapper[5099]: I0121 18:16:06.881137 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:06 crc kubenswrapper[5099]: E0121 18:16:06.881638 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:07.381623784 +0000 UTC m=+124.795586245 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:06 crc kubenswrapper[5099]: I0121 18:16:06.897583 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-spkz4" event={"ID":"a8454077-b0c4-4cdc-80e0-a620312eec57","Type":"ContainerStarted","Data":"7fd5c3b53dcf29a5a372761390338961d39b8d94dd8f17df6b18ad87ba67a8e9"} Jan 21 18:16:06 crc kubenswrapper[5099]: I0121 18:16:06.897648 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-zlpql" event={"ID":"bee88171-b2f0-49bb-92aa-8a0d79d87cb7","Type":"ContainerStarted","Data":"589a5c693a83d931b05c6c81c33f74e130efe642df03d2df2c5e21874f113621"} Jan 21 18:16:06 crc kubenswrapper[5099]: I0121 18:16:06.897668 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6fsvr"] Jan 21 18:16:06 crc kubenswrapper[5099]: I0121 18:16:06.899626 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xdblj" Jan 21 18:16:06 crc kubenswrapper[5099]: I0121 18:16:06.906515 5099 patch_prober.go:28] interesting pod/downloads-747b44746d-zlpql container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Jan 21 18:16:06 crc kubenswrapper[5099]: I0121 18:16:06.906600 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-zlpql" podUID="bee88171-b2f0-49bb-92aa-8a0d79d87cb7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.41:8080/\": dial tcp 10.217.0.41:8080: connect: connection refused" Jan 21 18:16:06 crc kubenswrapper[5099]: I0121 18:16:06.916583 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Jan 21 18:16:06 crc kubenswrapper[5099]: I0121 18:16:06.944083 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-zlpql" Jan 21 18:16:06 crc kubenswrapper[5099]: I0121 18:16:06.944509 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xdblj"] Jan 21 18:16:06 crc kubenswrapper[5099]: I0121 18:16:06.944524 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6fsvr"] Jan 21 18:16:06 crc kubenswrapper[5099]: I0121 18:16:06.944534 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-q9l8j" event={"ID":"2dde6863-5960-4b1b-b694-be1862901fb0","Type":"ContainerStarted","Data":"5ce0496f11d2cd4fc7d9d26d205e8431eb17ccb353db2c93b5b0471be319b1ea"} Jan 21 18:16:06 crc kubenswrapper[5099]: I0121 18:16:06.944554 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-v9zdl"] Jan 21 18:16:06 crc kubenswrapper[5099]: I0121 18:16:06.945440 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6fsvr" Jan 21 18:16:06 crc kubenswrapper[5099]: I0121 18:16:06.957038 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Jan 21 18:16:06 crc kubenswrapper[5099]: I0121 18:16:06.971035 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-6r5xz" event={"ID":"cf468a9b-3840-46c0-8390-79ec278be1d0","Type":"ContainerStarted","Data":"3084a312ecb3ca809653f662953e35a8e083cdd178038736a4fc1e3bd9b4a83f"} Jan 21 18:16:06 crc kubenswrapper[5099]: I0121 18:16:06.971096 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjbt9" event={"ID":"2925eaf0-c587-4c7f-a246-5b64c7103637","Type":"ContainerStarted","Data":"58be2c87b4d740cb3712bfe4eec915723934bfdadd52d2855998af318045d160"} Jan 21 18:16:06 crc kubenswrapper[5099]: I0121 18:16:06.971120 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-xr84c" event={"ID":"2b370d45-15f6-4f78-90d8-f15bb7f31949","Type":"ContainerStarted","Data":"b2f874c0283684617dc398fbca2f039a81c7b5f175cc28d51f7f920ccc2414e9"} Jan 21 18:16:06 crc kubenswrapper[5099]: I0121 18:16:06.971143 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-ddw2j" event={"ID":"63bfe3eb-44bd-45db-8327-52468bb9ca12","Type":"ContainerStarted","Data":"a7d9c40a0447e6c795b191e854e102b7b4d39c5064f864a16b1a9881dbe5ad58"} Jan 21 18:16:06 crc kubenswrapper[5099]: I0121 18:16:06.972059 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v9zdl" Jan 21 18:16:06 crc kubenswrapper[5099]: I0121 18:16:06.972831 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-xr84c" Jan 21 18:16:06 crc kubenswrapper[5099]: I0121 18:16:06.973692 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" event={"ID":"39b31197-feb5-4a81-8dca-de4b873dc013","Type":"ContainerStarted","Data":"b73ca0e642c81b58bc7867dc41a0cee874a89509b144cd6128f9ce8bde3179ea"} Jan 21 18:16:06 crc kubenswrapper[5099]: I0121 18:16:06.975809 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" Jan 21 18:16:06 crc kubenswrapper[5099]: I0121 18:16:06.975872 5099 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-6qnjf container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.22:6443/healthz\": dial tcp 10.217.0.22:6443: connect: connection refused" start-of-body= Jan 21 18:16:06 crc kubenswrapper[5099]: I0121 18:16:06.975907 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" podUID="39b31197-feb5-4a81-8dca-de4b873dc013" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.22:6443/healthz\": dial tcp 10.217.0.22:6443: connect: connection refused" Jan 21 18:16:06 crc kubenswrapper[5099]: I0121 18:16:06.982262 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:06 crc kubenswrapper[5099]: E0121 18:16:06.982640 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:07.482621888 +0000 UTC m=+124.896584349 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:06 crc kubenswrapper[5099]: I0121 18:16:06.984945 5099 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-xr84c container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" start-of-body= Jan 21 18:16:06 crc kubenswrapper[5099]: I0121 18:16:06.984979 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-xr84c" podUID="2b370d45-15f6-4f78-90d8-f15bb7f31949" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" Jan 21 18:16:06 crc kubenswrapper[5099]: I0121 18:16:06.991376 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-fwthw" event={"ID":"bd987c53-2ea5-4da5-a0ea-bb16dc45cdb0","Type":"ContainerStarted","Data":"76fcabf7999ac9ba9d7eb9741500ef05bb63886b89ebbdc5c0867d00704c5ddd"} Jan 21 18:16:06 crc kubenswrapper[5099]: I0121 18:16:06.991832 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjbt9" podStartSLOduration=95.991820357 podStartE2EDuration="1m35.991820357s" podCreationTimestamp="2026-01-21 18:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:16:06.95758658 +0000 UTC m=+124.371549031" watchObservedRunningTime="2026-01-21 18:16:06.991820357 +0000 UTC m=+124.405782818" Jan 21 18:16:06 crc kubenswrapper[5099]: I0121 18:16:06.992471 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v9zdl"] Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.009029 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-6ww7n" event={"ID":"09b91830-9a07-4d48-9435-c5f7e9c2a402","Type":"ContainerStarted","Data":"90422f8305f6161afe6070660a3acd71e0cc062d1aeeecaf6a8a841c3353ce6e"} Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.042574 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-spkz4" podStartSLOduration=9.04254863 podStartE2EDuration="9.04254863s" 
podCreationTimestamp="2026-01-21 18:15:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:16:07.039623794 +0000 UTC m=+124.453586265" watchObservedRunningTime="2026-01-21 18:16:07.04254863 +0000 UTC m=+124.456511101" Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.046886 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-lxg2b" event={"ID":"67a0e83c-f043-4329-95ac-4cc0a6ac538f","Type":"ContainerStarted","Data":"fa1f346b9ce96a2f7d4a1d26a91e963bb32b45873f80bb3688ec0d64dae65c19"} Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.048703 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-lxg2b" Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.065557 5099 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-lxg2b container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.065637 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-lxg2b" podUID="67a0e83c-f043-4329-95ac-4cc0a6ac538f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.068572 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x5qmz"] Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.083466 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/450bc18d-8ddc-42eb-bc2b-0cd44d7198b8-utilities\") pod \"certified-operators-6fsvr\" (UID: \"450bc18d-8ddc-42eb-bc2b-0cd44d7198b8\") " pod="openshift-marketplace/certified-operators-6fsvr" Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.083515 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4wq9\" (UniqueName: \"kubernetes.io/projected/a5ddd790-cf10-4dfc-a9ed-2ad08824bf51-kube-api-access-w4wq9\") pod \"community-operators-xdblj\" (UID: \"a5ddd790-cf10-4dfc-a9ed-2ad08824bf51\") " pod="openshift-marketplace/community-operators-xdblj" Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.083553 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.083572 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/450bc18d-8ddc-42eb-bc2b-0cd44d7198b8-catalog-content\") pod \"certified-operators-6fsvr\" (UID: \"450bc18d-8ddc-42eb-bc2b-0cd44d7198b8\") " pod="openshift-marketplace/certified-operators-6fsvr" Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.083633 5099 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b73a0c1c-91ce-4902-bbcf-cf68e52e0236-utilities\") pod \"certified-operators-v9zdl\" (UID: \"b73a0c1c-91ce-4902-bbcf-cf68e52e0236\") " pod="openshift-marketplace/certified-operators-v9zdl" Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.083707 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5ddd790-cf10-4dfc-a9ed-2ad08824bf51-utilities\") pod \"community-operators-xdblj\" (UID: \"a5ddd790-cf10-4dfc-a9ed-2ad08824bf51\") " pod="openshift-marketplace/community-operators-xdblj" Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.083721 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b73a0c1c-91ce-4902-bbcf-cf68e52e0236-catalog-content\") pod \"certified-operators-v9zdl\" (UID: \"b73a0c1c-91ce-4902-bbcf-cf68e52e0236\") " pod="openshift-marketplace/certified-operators-v9zdl" Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.083885 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5ddd790-cf10-4dfc-a9ed-2ad08824bf51-catalog-content\") pod \"community-operators-xdblj\" (UID: \"a5ddd790-cf10-4dfc-a9ed-2ad08824bf51\") " pod="openshift-marketplace/community-operators-xdblj" Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.083985 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-576mp\" (UniqueName: \"kubernetes.io/projected/450bc18d-8ddc-42eb-bc2b-0cd44d7198b8-kube-api-access-576mp\") pod \"certified-operators-6fsvr\" (UID: \"450bc18d-8ddc-42eb-bc2b-0cd44d7198b8\") " pod="openshift-marketplace/certified-operators-6fsvr" Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.084112 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggvht\" (UniqueName: \"kubernetes.io/projected/b73a0c1c-91ce-4902-bbcf-cf68e52e0236-kube-api-access-ggvht\") pod \"certified-operators-v9zdl\" (UID: \"b73a0c1c-91ce-4902-bbcf-cf68e52e0236\") " pod="openshift-marketplace/certified-operators-v9zdl" Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.110454 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x5qmz" Jan 21 18:16:07 crc kubenswrapper[5099]: E0121 18:16:07.116092 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:07.616049822 +0000 UTC m=+125.030012283 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.129071 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x5qmz"] Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.132835 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-4j82v" event={"ID":"039025e0-e2cb-479d-b87a-9966fa3d96f2","Type":"ContainerStarted","Data":"b3b80f5f7ed86953afe772be8f1b119af76cadb809886ce0d69e8cbd1609b403"} Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.134964 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-4j82v" Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.170865 5099 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-4j82v container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.170947 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-4j82v" podUID="039025e0-e2cb-479d-b87a-9966fa3d96f2" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.186504 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-vr7cg" event={"ID":"419c7428-8eea-4a26-8329-f359a77e5c80","Type":"ContainerStarted","Data":"4d0517b2b0c13ee7ae3ccf2e75b81c399d8bdafcbf931deb3adc90acf26d8937"} Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.187828 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.188142 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb54ba82-2a5f-46a9-8c2c-f59dc17d237d-catalog-content\") pod \"community-operators-x5qmz\" (UID: \"eb54ba82-2a5f-46a9-8c2c-f59dc17d237d\") " pod="openshift-marketplace/community-operators-x5qmz" Jan 21 18:16:07 crc kubenswrapper[5099]: E0121 18:16:07.188255 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:07.688234541 +0000 UTC m=+125.102197002 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.188370 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ggvht\" (UniqueName: \"kubernetes.io/projected/b73a0c1c-91ce-4902-bbcf-cf68e52e0236-kube-api-access-ggvht\") pod \"certified-operators-v9zdl\" (UID: \"b73a0c1c-91ce-4902-bbcf-cf68e52e0236\") " pod="openshift-marketplace/certified-operators-v9zdl" Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.188704 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/450bc18d-8ddc-42eb-bc2b-0cd44d7198b8-utilities\") pod \"certified-operators-6fsvr\" (UID: \"450bc18d-8ddc-42eb-bc2b-0cd44d7198b8\") " pod="openshift-marketplace/certified-operators-6fsvr" Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.189191 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w4wq9\" (UniqueName: \"kubernetes.io/projected/a5ddd790-cf10-4dfc-a9ed-2ad08824bf51-kube-api-access-w4wq9\") pod \"community-operators-xdblj\" (UID: \"a5ddd790-cf10-4dfc-a9ed-2ad08824bf51\") " pod="openshift-marketplace/community-operators-xdblj" Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.189333 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.189437 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/450bc18d-8ddc-42eb-bc2b-0cd44d7198b8-catalog-content\") pod \"certified-operators-6fsvr\" (UID: \"450bc18d-8ddc-42eb-bc2b-0cd44d7198b8\") " pod="openshift-marketplace/certified-operators-6fsvr" Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.189592 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b73a0c1c-91ce-4902-bbcf-cf68e52e0236-utilities\") pod \"certified-operators-v9zdl\" (UID: \"b73a0c1c-91ce-4902-bbcf-cf68e52e0236\") " pod="openshift-marketplace/certified-operators-v9zdl" Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.189683 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5ddd790-cf10-4dfc-a9ed-2ad08824bf51-utilities\") pod \"community-operators-xdblj\" (UID: \"a5ddd790-cf10-4dfc-a9ed-2ad08824bf51\") " pod="openshift-marketplace/community-operators-xdblj" Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.189775 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b73a0c1c-91ce-4902-bbcf-cf68e52e0236-catalog-content\") pod \"certified-operators-v9zdl\" (UID: \"b73a0c1c-91ce-4902-bbcf-cf68e52e0236\") " pod="openshift-marketplace/certified-operators-v9zdl"
Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.189913 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb54ba82-2a5f-46a9-8c2c-f59dc17d237d-utilities\") pod \"community-operators-x5qmz\" (UID: \"eb54ba82-2a5f-46a9-8c2c-f59dc17d237d\") " pod="openshift-marketplace/community-operators-x5qmz" Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.189994 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzdbt\" (UniqueName: \"kubernetes.io/projected/eb54ba82-2a5f-46a9-8c2c-f59dc17d237d-kube-api-access-gzdbt\") pod \"community-operators-x5qmz\" (UID: \"eb54ba82-2a5f-46a9-8c2c-f59dc17d237d\") " pod="openshift-marketplace/community-operators-x5qmz" Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.190156 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5ddd790-cf10-4dfc-a9ed-2ad08824bf51-catalog-content\") pod \"community-operators-xdblj\" (UID: \"a5ddd790-cf10-4dfc-a9ed-2ad08824bf51\") " pod="openshift-marketplace/community-operators-xdblj" Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.190322 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-576mp\" (UniqueName: \"kubernetes.io/projected/450bc18d-8ddc-42eb-bc2b-0cd44d7198b8-kube-api-access-576mp\") pod \"certified-operators-6fsvr\" (UID: \"450bc18d-8ddc-42eb-bc2b-0cd44d7198b8\") " pod="openshift-marketplace/certified-operators-6fsvr" Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.194903 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5ddd790-cf10-4dfc-a9ed-2ad08824bf51-utilities\") pod \"community-operators-xdblj\" (UID: \"a5ddd790-cf10-4dfc-a9ed-2ad08824bf51\") " pod="openshift-marketplace/community-operators-xdblj" Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.195686 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/450bc18d-8ddc-42eb-bc2b-0cd44d7198b8-utilities\") pod \"certified-operators-6fsvr\" (UID: \"450bc18d-8ddc-42eb-bc2b-0cd44d7198b8\") " pod="openshift-marketplace/certified-operators-6fsvr" Jan 21 18:16:07 crc kubenswrapper[5099]: E0121 18:16:07.196938 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:07.696922646 +0000 UTC m=+125.110885107 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.196992 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b73a0c1c-91ce-4902-bbcf-cf68e52e0236-catalog-content\") pod \"certified-operators-v9zdl\" (UID: \"b73a0c1c-91ce-4902-bbcf-cf68e52e0236\") " pod="openshift-marketplace/certified-operators-v9zdl" Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.197168 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/450bc18d-8ddc-42eb-bc2b-0cd44d7198b8-catalog-content\") pod \"certified-operators-6fsvr\" (UID: \"450bc18d-8ddc-42eb-bc2b-0cd44d7198b8\") " pod="openshift-marketplace/certified-operators-6fsvr" Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.197648 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5ddd790-cf10-4dfc-a9ed-2ad08824bf51-catalog-content\") pod \"community-operators-xdblj\" (UID: \"a5ddd790-cf10-4dfc-a9ed-2ad08824bf51\") " pod="openshift-marketplace/community-operators-xdblj" Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.214174 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b73a0c1c-91ce-4902-bbcf-cf68e52e0236-utilities\") pod \"certified-operators-v9zdl\" (UID: \"b73a0c1c-91ce-4902-bbcf-cf68e52e0236\") " pod="openshift-marketplace/certified-operators-v9zdl" Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.262248 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-hkwx9" event={"ID":"c13f0ecd-bdc7-4f94-9013-3277f1b20451","Type":"ContainerStarted","Data":"6b5d9be326035cb42f23634577a1832334927b7bc0f17e35702c8e928c7a5fd0"} Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.287087 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-576mp\" (UniqueName: \"kubernetes.io/projected/450bc18d-8ddc-42eb-bc2b-0cd44d7198b8-kube-api-access-576mp\") pod \"certified-operators-6fsvr\" (UID: \"450bc18d-8ddc-42eb-bc2b-0cd44d7198b8\") " pod="openshift-marketplace/certified-operators-6fsvr" Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.287178 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4wq9\" (UniqueName: \"kubernetes.io/projected/a5ddd790-cf10-4dfc-a9ed-2ad08824bf51-kube-api-access-w4wq9\") pod \"community-operators-xdblj\" (UID: \"a5ddd790-cf10-4dfc-a9ed-2ad08824bf51\") " pod="openshift-marketplace/community-operators-xdblj" Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.289349 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggvht\" (UniqueName: \"kubernetes.io/projected/b73a0c1c-91ce-4902-bbcf-cf68e52e0236-kube-api-access-ggvht\") pod \"certified-operators-v9zdl\" (UID: \"b73a0c1c-91ce-4902-bbcf-cf68e52e0236\") " pod="openshift-marketplace/certified-operators-v9zdl"
Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.294437 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.295371 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb54ba82-2a5f-46a9-8c2c-f59dc17d237d-utilities\") pod \"community-operators-x5qmz\" (UID: \"eb54ba82-2a5f-46a9-8c2c-f59dc17d237d\") " pod="openshift-marketplace/community-operators-x5qmz" Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.295454 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gzdbt\" (UniqueName: \"kubernetes.io/projected/eb54ba82-2a5f-46a9-8c2c-f59dc17d237d-kube-api-access-gzdbt\") pod \"community-operators-x5qmz\" (UID: \"eb54ba82-2a5f-46a9-8c2c-f59dc17d237d\") " pod="openshift-marketplace/community-operators-x5qmz" Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.295714 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb54ba82-2a5f-46a9-8c2c-f59dc17d237d-catalog-content\") pod \"community-operators-x5qmz\" (UID: \"eb54ba82-2a5f-46a9-8c2c-f59dc17d237d\") " pod="openshift-marketplace/community-operators-x5qmz" Jan 21 18:16:07 crc kubenswrapper[5099]: E0121 18:16:07.296125 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:07.796105604 +0000 UTC m=+125.210068065 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
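
Every failed operation above is the same story told twice per cycle: the kubevirt.io.hostpath-provisioner node plugin has not yet registered with this kubelet, so both the MountDevice for image-registry-66587d64c8-tjl2r and the TearDown for the terminated pod 9e9b5059-1b3e-4067-a63d-2952cbe863af fail their driver lookup, and nestedpendingoperations.go requeues each attempt with the 500ms durationBeforeRetry seen in each E-line here. A minimal, illustrative Go sketch of that registration-gated retry loop follows; the map, the variable names, and the 2-second registration delay are inventions for this sketch, not the kubelet's actual internals.

package main

import (
	"fmt"
	"time"
)

func main() {
	const retryDelay = 500 * time.Millisecond // matches "durationBeforeRetry 500ms" above
	const driver = "kubevirt.io.hostpath-provisioner"

	// Stand-in for the kubelet's table of registered CSI node plugins; it stays
	// empty until the driver's registration socket completes its handshake.
	registered := map[string]bool{}
	registersAt := time.Now().Add(2 * time.Second) // invented delay: driver pod still starting

	for attempt := 1; ; attempt++ {
		if time.Now().After(registersAt) {
			registered[driver] = true // plugin registration finally lands
		}
		if registered[driver] {
			fmt.Printf("attempt %d: %s registered, MountDevice can proceed\n", attempt, driver)
			return
		}
		// Same failure shape as the log: the lookup fails, the operation is requeued.
		fmt.Printf("attempt %d: driver name %s not found in the list of registered CSI drivers; retry in %v\n",
			attempt, driver, retryDelay)
		time.Sleep(retryDelay)
	}
}
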
Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.296649 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb54ba82-2a5f-46a9-8c2c-f59dc17d237d-catalog-content\") pod \"community-operators-x5qmz\" (UID: \"eb54ba82-2a5f-46a9-8c2c-f59dc17d237d\") " pod="openshift-marketplace/community-operators-x5qmz" Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.297416 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb54ba82-2a5f-46a9-8c2c-f59dc17d237d-utilities\") pod \"community-operators-x5qmz\" (UID: \"eb54ba82-2a5f-46a9-8c2c-f59dc17d237d\") " pod="openshift-marketplace/community-operators-x5qmz" Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.306066 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-zlpql" podStartSLOduration=96.306048201 podStartE2EDuration="1m36.306048201s" podCreationTimestamp="2026-01-21 18:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:16:07.183563431 +0000 UTC m=+124.597525912" watchObservedRunningTime="2026-01-21 18:16:07.306048201 +0000 UTC m=+124.720010662" Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.344986 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-xhj5t" event={"ID":"178950b5-b1b9-4d7d-90b1-ba4fb79fd10d","Type":"ContainerStarted","Data":"0514f8b8f81a160dafe0953a15835f3334fccc2c9e0aaa932e40c27a06bc74c8"} Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.393497 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-84k5t" event={"ID":"0494dafa-d272-45bf-a11e-7ca78f92223d","Type":"ContainerStarted","Data":"66208561d8a93ec0021848441d362f4d3e1f07024791286908f86ffa4f07740f"} Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.398613 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzdbt\" (UniqueName: \"kubernetes.io/projected/eb54ba82-2a5f-46a9-8c2c-f59dc17d237d-kube-api-access-gzdbt\") pod \"community-operators-x5qmz\" (UID: \"eb54ba82-2a5f-46a9-8c2c-f59dc17d237d\") " pod="openshift-marketplace/community-operators-x5qmz" Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.399706 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r"
Jan 21 18:16:07 crc kubenswrapper[5099]: E0121 18:16:07.400027 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:07.900013493 +0000 UTC m=+125.313975944 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.417923 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-lkws9" event={"ID":"2805b946-8ae5-4a2f-8ae0-e5fb058174e7","Type":"ContainerStarted","Data":"5435453b4089c5d7a0113b14e95958721e4ca1bfe557dd269d876d2b97f3f54d"} Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.494962 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" podStartSLOduration=96.494923781 podStartE2EDuration="1m36.494923781s" podCreationTimestamp="2026-01-21 18:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:16:07.487512089 +0000 UTC m=+124.901474540" watchObservedRunningTime="2026-01-21 18:16:07.494923781 +0000 UTC m=+124.908886242" Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.495688 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-fwthw" podStartSLOduration=96.49568048 podStartE2EDuration="1m36.49568048s" podCreationTimestamp="2026-01-21 18:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:16:07.320276529 +0000 UTC m=+124.734238990" watchObservedRunningTime="2026-01-21 18:16:07.49568048 +0000 UTC m=+124.909642941" Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.503123 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-vgfbc" event={"ID":"ac0d4bff-1835-45f9-bca5-e84de2f1c705","Type":"ContainerStarted","Data":"7637eabd2246632aadcb064c7aa588d276d462c28ab9ed8de76cb8f066a0663a"} Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.507150 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-lqqhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 18:16:07 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Jan 21 18:16:07 crc kubenswrapper[5099]: [+]process-running ok Jan 21 18:16:07 crc kubenswrapper[5099]: healthz check failed Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.507270 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" podUID="ad44bdbe-5009-4b21-ad83-21185ec2d86d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.527558 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:07 crc kubenswrapper[5099]: E0121 18:16:07.527921 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:08.027876744 +0000 UTC m=+125.441839215 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.528018 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:07 crc kubenswrapper[5099]: E0121 18:16:07.529306 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:08.02929801 +0000 UTC m=+125.443260471 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.540383 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6fsvr" Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.541384 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6zt7l" event={"ID":"d6a1f7e0-130a-4bf8-8602-8b1800b7de37","Type":"ContainerStarted","Data":"af492c5c9dca8b690c564d9198b5ecfbc2ec013962600beb54af82cfd770ffbe"}
Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.557648 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xdblj" Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.560551 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-p9ggs" event={"ID":"8f5bf46f-e39c-4fa5-9ec3-24912f616295","Type":"ContainerStarted","Data":"6eeb3ef33504ea0bc2bf3a21ba08ecd92737cad3b61928681efa9cc0d6388b6d"} Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.582996 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d44f6ddf-6r5xz" podStartSLOduration=96.5829658 podStartE2EDuration="1m36.5829658s" podCreationTimestamp="2026-01-21 18:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:16:07.576327498 +0000 UTC m=+124.990289949" watchObservedRunningTime="2026-01-21 18:16:07.5829658 +0000 UTC m=+124.996928271" Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.635929 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:07 crc kubenswrapper[5099]: E0121 18:16:07.636495 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:08.136450695 +0000 UTC m=+125.550413286 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.636654 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:07 crc kubenswrapper[5099]: E0121 18:16:07.638314 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:08.138304042 +0000 UTC m=+125.552266503 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.653401 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-74545575db-ddw2j" podStartSLOduration=96.653380432 podStartE2EDuration="1m36.653380432s" podCreationTimestamp="2026-01-21 18:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:16:07.652359346 +0000 UTC m=+125.066321827" watchObservedRunningTime="2026-01-21 18:16:07.653380432 +0000 UTC m=+125.067342893" Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.653880 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-477z9" event={"ID":"0e5a1f9f-a6df-4d87-bc2d-509d2632fb32","Type":"ContainerStarted","Data":"fffc20d77d1a3fb378576cf57a010fdb6177e9f6439b5eac8e2c5c8b968aac20"} Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.654935 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v9zdl" Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.656215 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x5qmz" Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.657213 5099 ???:1] "http: TLS handshake error from 192.168.126.11:57396: no serving certificate available for the kubelet" Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.742933 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:07 crc kubenswrapper[5099]: E0121 18:16:07.743147 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:08.243120096 +0000 UTC m=+125.657082557 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.753783 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-xr84c" podStartSLOduration=96.753758712 podStartE2EDuration="1m36.753758712s" podCreationTimestamp="2026-01-21 18:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:16:07.753438203 +0000 UTC m=+125.167400664" watchObservedRunningTime="2026-01-21 18:16:07.753758712 +0000 UTC m=+125.167721173" Jan 21 18:16:07 crc kubenswrapper[5099]: E0121 18:16:07.754170 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:08.254155591 +0000 UTC m=+125.668118052 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.753830 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.878473 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:07 crc kubenswrapper[5099]: E0121 18:16:07.879774 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:08.379745683 +0000 UTC m=+125.793708144 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.903489 5099 ???:1] "http: TLS handshake error from 192.168.126.11:57402: no serving certificate available for the kubelet" Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.954277 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-6ww7n" podStartSLOduration=96.954258672 podStartE2EDuration="1m36.954258672s" podCreationTimestamp="2026-01-21 18:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:16:07.90396615 +0000 UTC m=+125.317928611" watchObservedRunningTime="2026-01-21 18:16:07.954258672 +0000 UTC m=+125.368221133" Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.969562 5099 ???:1] "http: TLS handshake error from 192.168.126.11:57414: no serving certificate available for the kubelet" Jan 21 18:16:07 crc kubenswrapper[5099]: I0121 18:16:07.984454 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:07 crc kubenswrapper[5099]: E0121 18:16:07.984842 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:08.484828773 +0000 UTC m=+125.898791234 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:08 crc kubenswrapper[5099]: I0121 18:16:08.013487 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-lxg2b" podStartSLOduration=97.013465404 podStartE2EDuration="1m37.013465404s" podCreationTimestamp="2026-01-21 18:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:16:08.012345375 +0000 UTC m=+125.426307846" watchObservedRunningTime="2026-01-21 18:16:08.013465404 +0000 UTC m=+125.427427865" Jan 21 18:16:08 crc kubenswrapper[5099]: I0121 18:16:08.067455 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-4j82v" podStartSLOduration=97.067429501 podStartE2EDuration="1m37.067429501s" podCreationTimestamp="2026-01-21 18:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:16:08.06120673 +0000 UTC m=+125.475169191" watchObservedRunningTime="2026-01-21 18:16:08.067429501 +0000 UTC m=+125.481391962" Jan 21 18:16:08 crc kubenswrapper[5099]: I0121 18:16:08.070760 5099 ???:1] "http: TLS handshake error from 192.168.126.11:57430: no serving certificate available for the kubelet" Jan 21 18:16:08 crc kubenswrapper[5099]: I0121 18:16:08.087594 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:08 crc kubenswrapper[5099]: E0121 18:16:08.087867 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:08.587823759 +0000 UTC m=+126.001786440 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:08 crc kubenswrapper[5099]: I0121 18:16:08.088136 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:08 crc kubenswrapper[5099]: E0121 18:16:08.088966 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:08.588955788 +0000 UTC m=+126.002918249 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:08 crc kubenswrapper[5099]: I0121 18:16:08.184383 5099 ???:1] "http: TLS handshake error from 192.168.126.11:57434: no serving certificate available for the kubelet" Jan 21 18:16:08 crc kubenswrapper[5099]: I0121 18:16:08.187965 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-755bb95488-477z9" podStartSLOduration=97.187936021 podStartE2EDuration="1m37.187936021s" podCreationTimestamp="2026-01-21 18:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:16:08.187608523 +0000 UTC m=+125.601570984" watchObservedRunningTime="2026-01-21 18:16:08.187936021 +0000 UTC m=+125.601898482" Jan 21 18:16:08 crc kubenswrapper[5099]: I0121 18:16:08.189714 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:08 crc kubenswrapper[5099]: E0121 18:16:08.190386 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:08.690348714 +0000 UTC m=+126.104311175 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:08 crc kubenswrapper[5099]: I0121 18:16:08.200024 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:08 crc kubenswrapper[5099]: E0121 18:16:08.200658 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:08.70063676 +0000 UTC m=+126.114599221 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:08 crc kubenswrapper[5099]: I0121 18:16:08.243195 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-xhj5t" podStartSLOduration=97.24316499 podStartE2EDuration="1m37.24316499s" podCreationTimestamp="2026-01-21 18:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:16:08.238233903 +0000 UTC m=+125.652196374" watchObservedRunningTime="2026-01-21 18:16:08.24316499 +0000 UTC m=+125.657127451" Jan 21 18:16:08 crc kubenswrapper[5099]: I0121 18:16:08.277330 5099 ???:1] "http: TLS handshake error from 192.168.126.11:57450: no serving certificate available for the kubelet" Jan 21 18:16:08 crc kubenswrapper[5099]: I0121 18:16:08.280979 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-hkwx9" podStartSLOduration=97.280950949 podStartE2EDuration="1m37.280950949s" podCreationTimestamp="2026-01-21 18:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:16:08.277360636 +0000 UTC m=+125.691323097" watchObservedRunningTime="2026-01-21 18:16:08.280950949 +0000 UTC m=+125.694913410" Jan 21 18:16:08 crc kubenswrapper[5099]: I0121 18:16:08.304516 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 18:16:08 crc kubenswrapper[5099]: E0121 18:16:08.305433 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:08.805410522 +0000 UTC m=+126.219372983 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:08 crc kubenswrapper[5099]: I0121 18:16:08.372600 5099 ???:1] "http: TLS handshake error from 192.168.126.11:57452: no serving certificate available for the kubelet" Jan 21 18:16:08 crc kubenswrapper[5099]: I0121 18:16:08.406746 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:08 crc kubenswrapper[5099]: E0121 18:16:08.407137 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:08.907124055 +0000 UTC m=+126.321086516 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:08 crc kubenswrapper[5099]: I0121 18:16:08.471316 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xdblj"] Jan 21 18:16:08 crc kubenswrapper[5099]: I0121 18:16:08.511064 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:08 crc kubenswrapper[5099]: E0121 18:16:08.512014 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:09.01199565 +0000 UTC m=+126.425958111 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:08 crc kubenswrapper[5099]: I0121 18:16:08.512468 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-lqqhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 18:16:08 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Jan 21 18:16:08 crc kubenswrapper[5099]: [+]process-running ok Jan 21 18:16:08 crc kubenswrapper[5099]: healthz check failed Jan 21 18:16:08 crc kubenswrapper[5099]: I0121 18:16:08.512512 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" podUID="ad44bdbe-5009-4b21-ad83-21185ec2d86d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 18:16:08 crc kubenswrapper[5099]: I0121 18:16:08.624499 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:08 crc kubenswrapper[5099]: E0121 18:16:08.624830 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:09.124814501 +0000 UTC m=+126.538776962 (durationBeforeRetry 500ms). 
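
The probe failures interleaved here, the olm-operator readiness check refusing connections earlier and the router startup probe returning HTTP 500 above, are ordinary cold-start noise: prober.go records whatever the HTTP request returned and the kubelet keeps probing. A self-contained sketch of such a probe loop, using a plain-HTTP listener on an invented local port in place of the real HTTPS endpoints and certificates:

package main

import (
	"fmt"
	"net"
	"net/http"
	"time"
)

func main() {
	url := "http://127.0.0.1:18443/healthz" // invented port; the real probes above target HTTPS endpoints

	// Simulate a container that only starts serving after a few seconds.
	go func() {
		time.Sleep(3 * time.Second)
		http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
			fmt.Fprintln(w, "ok")
		})
		ln, err := net.Listen("tcp", "127.0.0.1:18443")
		if err != nil {
			panic(err)
		}
		http.Serve(ln, nil)
	}()

	// Probe loop: fails with "connection refused" until the listener is up,
	// the same shape as the prober.go lines in the log.
	client := &http.Client{Timeout: time.Second}
	for {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("Probe failed:", err) // e.g. dial tcp 127.0.0.1:18443: connect: connection refused
		} else {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				fmt.Println("Probe succeeded")
				return
			}
			fmt.Println("Probe failed: status", code)
		}
		time.Sleep(time.Second)
	}
}
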
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:08 crc kubenswrapper[5099]: I0121 18:16:08.632083 5099 ???:1] "http: TLS handshake error from 192.168.126.11:57460: no serving certificate available for the kubelet" Jan 21 18:16:08 crc kubenswrapper[5099]: I0121 18:16:08.717358 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6gg9l"] Jan 21 18:16:08 crc kubenswrapper[5099]: I0121 18:16:08.725951 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:08 crc kubenswrapper[5099]: E0121 18:16:08.726341 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:09.226321298 +0000 UTC m=+126.640283759 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:08 crc kubenswrapper[5099]: I0121 18:16:08.836772 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:08 crc kubenswrapper[5099]: E0121 18:16:08.837726 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:09.337710042 +0000 UTC m=+126.751672493 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:08 crc kubenswrapper[5099]: I0121 18:16:08.940964 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:08 crc kubenswrapper[5099]: E0121 18:16:08.941369 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:09.441338645 +0000 UTC m=+126.855301106 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.043701 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:09 crc kubenswrapper[5099]: E0121 18:16:09.044200 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:09.544186137 +0000 UTC m=+126.958148598 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.144602 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 18:16:09 crc kubenswrapper[5099]: E0121 18:16:09.144907 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:09.644853543 +0000 UTC m=+127.058816194 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.247439 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r"
Jan 21 18:16:09 crc kubenswrapper[5099]: E0121 18:16:09.247896 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:09.747878521 +0000 UTC m=+127.161840982 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.358296 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 18:16:09 crc kubenswrapper[5099]: E0121 18:16:09.358522 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:09.858494874 +0000 UTC m=+127.272457335 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.359027 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r"
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.359424 5099 ???:1] "http: TLS handshake error from 192.168.126.11:57476: no serving certificate available for the kubelet"
Jan 21 18:16:09 crc kubenswrapper[5099]: E0121 18:16:09.359659 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:09.859628794 +0000 UTC m=+127.273591275 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.459936 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 18:16:09 crc kubenswrapper[5099]: E0121 18:16:09.460579 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:09.960531766 +0000 UTC m=+127.374494227 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.500852 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-lqqhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 18:16:09 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld
Jan 21 18:16:09 crc kubenswrapper[5099]: [+]process-running ok
Jan 21 18:16:09 crc kubenswrapper[5099]: healthz check failed
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.500967 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" podUID="ad44bdbe-5009-4b21-ad83-21185ec2d86d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.561639 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r"
Jan 21 18:16:09 crc kubenswrapper[5099]: E0121 18:16:09.562144 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:10.062119415 +0000 UTC m=+127.476082056 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.661163 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6gg9l"]
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.661223 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nnlkl"]
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.663238 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6gg9l"
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.665862 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 18:16:09 crc kubenswrapper[5099]: E0121 18:16:09.666239 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:10.166225431 +0000 UTC m=+127.580187892 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
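
Every mount and unmount retry in the cycle above fails for the same reason: the kubelet has no registered CSI driver named kubevirt.io.hostpath-provisioner. Drivers announce themselves to the kubelet by placing a registration socket in its plugin-registration directory (by default /var/lib/kubelet/plugins_registry, created by the driver's node-driver-registrar sidecar), so a missing entry there usually means the driver's node pod has not started yet. A minimal sketch, run on the node as root, that lists which plugin sockets are actually present; the directory path is the stock default and may differ on a kubelet with a non-standard --root-dir:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func main() {
    	// Default kubelet plugin registration directory; adjust if the
    	// kubelet runs with a non-standard root directory.
    	dir := "/var/lib/kubelet/plugins_registry"

    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		fmt.Fprintf(os.Stderr, "cannot read %s: %v\n", dir, err)
    		os.Exit(1)
    	}
    	found := 0
    	for _, e := range entries {
    		info, err := e.Info()
    		if err != nil {
    			continue
    		}
    		// CSI drivers register through unix sockets, typically named
    		// <driver-name>-reg.sock by the node-driver-registrar.
    		if info.Mode()&os.ModeSocket != 0 {
    			fmt.Println("registered plugin socket:", filepath.Join(dir, e.Name()))
    			found++
    		}
    	}
    	if found == 0 {
    		fmt.Println("no plugin registration sockets found; no CSI driver has registered with this kubelet")
    	}
    }

If no kubevirt.io.hostpath-provisioner socket shows up, the next place to look is the driver's DaemonSet pod on this node, since registration only happens once that pod is running.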
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.702815 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nnlkl"]
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.702867 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x5qmz"]
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.702879 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v9zdl"]
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.702895 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-62qpd" event={"ID":"cc660b0c-3432-4bfa-8349-0f7ac08afce8","Type":"ContainerStarted","Data":"d3eecfa84365c0b2f36306473451d9c18a59fe17cfc401bd443f8ddeeaab2c83"}
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.702917 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6fsvr"]
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.702931 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483655-48djt" event={"ID":"05e481c5-0ad1-4c76-bf43-a32b82b763c7","Type":"ContainerStarted","Data":"462e144af1c83a94a48d6dab7d1525a6ce5af7900773a8993d3a5ed757c3fc9e"}
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.702942 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-xfrc5" event={"ID":"da3a0959-1a85-473a-95d5-51b77e30c5da","Type":"ContainerStarted","Data":"e230b4a9d620a208ff70c7b5144c5cd967d4dd5d30efe8db1d66b3bc0f405b1a"}
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.702955 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-lkws9" event={"ID":"2805b946-8ae5-4a2f-8ae0-e5fb058174e7","Type":"ContainerStarted","Data":"ea0948cdf3e154999b382b9f6c07f64820c5f2a1f25842c25d7683b02adb6d2a"}
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.702964 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-vgfbc" event={"ID":"ac0d4bff-1835-45f9-bca5-e84de2f1c705","Type":"ContainerStarted","Data":"59a07b7361dd67d77bc9a85a6dc87d965463de52b86e42d032168b75c506f635"}
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.702981 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-lhgtf" event={"ID":"ee64e319-d2fd-4a23-808e-a4ab684a16af","Type":"ContainerStarted","Data":"327e864e55473812d0009f89d127f011c6cf2ea9ba0991141964a7672e776606"}
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.702991 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-48lzl" event={"ID":"8855b7e4-a1e8-41ae-b995-832120b0bdcd","Type":"ContainerStarted","Data":"6c2bb2ae6a0397a9c0a8872d353972999e7ac0e1df52814342650c6ac5b52643"}
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.703002 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-p9ggs" event={"ID":"8f5bf46f-e39c-4fa5-9ec3-24912f616295","Type":"ContainerStarted","Data":"840bcb610219af08d5f2426fe06734a809c2017e3659ee0457cf6c6bd6f012cb"}
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.703012 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-q9l8j" event={"ID":"2dde6863-5960-4b1b-b694-be1862901fb0","Type":"ContainerStarted","Data":"145bab1e0e2ec8340839e55eaac16056be650865239c83a1df5bbc0d673ba198"}
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.703021 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-2zxgx" event={"ID":"cb4ca0e8-efd8-493c-8784-2e28266561eb","Type":"ContainerStarted","Data":"b613ee6ad0de0cc584357bc31cf67ce8629d5a217157b1b19bb3d59912491b92"}
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.703031 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-z2ttk" event={"ID":"f08a4565-4bb3-44b9-90e0-1b841c3127ea","Type":"ContainerStarted","Data":"d27ed896c96ecb9a12c159e410c2927547d0369f2007dd38b9c71d2598a31bf7"}
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.703158 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nnlkl"
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.703939 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\""
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.717780 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-zjwsj" podUID="e0db719c-cb3c-4c7d-ab76-20a341a011e6" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://f7db3ee6f61c5c01e7605630be93c8b733c80c9c6c756304b44f3630f68f632a" gracePeriod=30
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.735457 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-48lzl"
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.736077 5099 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-lxg2b container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body=
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.736110 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-lxg2b" podUID="67a0e83c-f043-4329-95ac-4cc0a6ac538f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused"
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.743907 5099 patch_prober.go:28] interesting pod/downloads-747b44746d-zlpql container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body=
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.743989 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-zlpql" podUID="bee88171-b2f0-49bb-92aa-8a0d79d87cb7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.41:8080/\": dial tcp 10.217.0.41:8080: connect: connection refused"
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.770728 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec86143c-2662-474d-857f-b54aee6207b0-catalog-content\") pod \"redhat-marketplace-nnlkl\" (UID: \"ec86143c-2662-474d-857f-b54aee6207b0\") " pod="openshift-marketplace/redhat-marketplace-nnlkl"
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.770791 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97792460-87be-4332-8f5b-dd5e8e2e5d63-utilities\") pod \"redhat-marketplace-6gg9l\" (UID: \"97792460-87be-4332-8f5b-dd5e8e2e5d63\") " pod="openshift-marketplace/redhat-marketplace-6gg9l"
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.771114 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9ktb\" (UniqueName: \"kubernetes.io/projected/97792460-87be-4332-8f5b-dd5e8e2e5d63-kube-api-access-p9ktb\") pod \"redhat-marketplace-6gg9l\" (UID: \"97792460-87be-4332-8f5b-dd5e8e2e5d63\") " pod="openshift-marketplace/redhat-marketplace-6gg9l"
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.771216 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97792460-87be-4332-8f5b-dd5e8e2e5d63-catalog-content\") pod \"redhat-marketplace-6gg9l\" (UID: \"97792460-87be-4332-8f5b-dd5e8e2e5d63\") " pod="openshift-marketplace/redhat-marketplace-6gg9l"
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.771332 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzmlr\" (UniqueName: \"kubernetes.io/projected/ec86143c-2662-474d-857f-b54aee6207b0-kube-api-access-vzmlr\") pod \"redhat-marketplace-nnlkl\" (UID: \"ec86143c-2662-474d-857f-b54aee6207b0\") " pod="openshift-marketplace/redhat-marketplace-nnlkl"
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.771527 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec86143c-2662-474d-857f-b54aee6207b0-utilities\") pod \"redhat-marketplace-nnlkl\" (UID: \"ec86143c-2662-474d-857f-b54aee6207b0\") " pod="openshift-marketplace/redhat-marketplace-nnlkl"
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.771611 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r"
Jan 21 18:16:09 crc kubenswrapper[5099]: E0121 18:16:09.795554 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:10.295538668 +0000 UTC m=+127.709501129 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.828622 5099 patch_prober.go:28] interesting pod/console-operator-67c89758df-48lzl container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/readyz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body=
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.828712 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-48lzl" podUID="8855b7e4-a1e8-41ae-b995-832120b0bdcd" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/readyz\": dial tcp 10.217.0.34:8443: connect: connection refused"
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.831653 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-4j82v"
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.879847 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.880248 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p9ktb\" (UniqueName: \"kubernetes.io/projected/97792460-87be-4332-8f5b-dd5e8e2e5d63-kube-api-access-p9ktb\") pod \"redhat-marketplace-6gg9l\" (UID: \"97792460-87be-4332-8f5b-dd5e8e2e5d63\") " pod="openshift-marketplace/redhat-marketplace-6gg9l"
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.880290 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97792460-87be-4332-8f5b-dd5e8e2e5d63-catalog-content\") pod \"redhat-marketplace-6gg9l\" (UID: \"97792460-87be-4332-8f5b-dd5e8e2e5d63\") " pod="openshift-marketplace/redhat-marketplace-6gg9l"
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.880336 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vzmlr\" (UniqueName: \"kubernetes.io/projected/ec86143c-2662-474d-857f-b54aee6207b0-kube-api-access-vzmlr\") pod \"redhat-marketplace-nnlkl\" (UID: \"ec86143c-2662-474d-857f-b54aee6207b0\") " pod="openshift-marketplace/redhat-marketplace-nnlkl"
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.880387 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec86143c-2662-474d-857f-b54aee6207b0-utilities\") pod \"redhat-marketplace-nnlkl\" (UID: \"ec86143c-2662-474d-857f-b54aee6207b0\") " pod="openshift-marketplace/redhat-marketplace-nnlkl"
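
Each of the failed operations above is re-queued with a "No retries permitted until ..." deadline, here consistently 500ms out (durationBeforeRetry). The actual bookkeeping lives in the kubelet's nestedpendingoperations.go and grows the delay per operation as failures accumulate; the sketch below only illustrates that general deadline-plus-backoff pattern and is not the kubelet's code. The key string and cap value are illustrative assumptions:

    package main

    import (
    	"fmt"
    	"time"
    )

    // backoff tracks a retry deadline per operation key, in the spirit of
    // the kubelet's "no retries permitted until" messages. Illustrative
    // sketch only, not the nestedpendingoperations implementation.
    type backoff struct {
    	initial, max time.Duration
    	delay        map[string]time.Duration
    	notBefore    map[string]time.Time
    }

    func newBackoff(initial, max time.Duration) *backoff {
    	return &backoff{
    		initial:   initial,
    		max:       max,
    		delay:     map[string]time.Duration{},
    		notBefore: map[string]time.Time{},
    	}
    }

    // fail records a failure and schedules the next permitted attempt.
    func (b *backoff) fail(key string) time.Time {
    	d, ok := b.delay[key]
    	if !ok {
    		d = b.initial // first failure: 500ms, as in the log above
    	} else {
    		d *= 2 // later failures back off, up to the cap
    		if d > b.max {
    			d = b.max
    		}
    	}
    	b.delay[key] = d
    	b.notBefore[key] = time.Now().Add(d)
    	return b.notBefore[key]
    }

    // allowed reports whether a retry may start now.
    func (b *backoff) allowed(key string) bool {
    	return time.Now().After(b.notBefore[key])
    }

    func main() {
    	bo := newBackoff(500*time.Millisecond, 2*time.Minute)
    	key := "volume=pvc-b21f41aa/pod=image-registry-66587d64c8-tjl2r" // hypothetical key
    	for i := 1; i <= 4; i++ {
    		next := bo.fail(key)
    		fmt.Printf("attempt %d failed; no retries permitted until %s\n", i, next.Format(time.RFC3339Nano))
    	}
    	fmt.Println("retry allowed now?", bo.allowed(key))
    }

This is why the identical error reappears on a roughly half-second cadence throughout the section: the reconciler keeps re-adding the operation as soon as its deadline passes.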
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.880459 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec86143c-2662-474d-857f-b54aee6207b0-catalog-content\") pod \"redhat-marketplace-nnlkl\" (UID: \"ec86143c-2662-474d-857f-b54aee6207b0\") " pod="openshift-marketplace/redhat-marketplace-nnlkl"
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.880487 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97792460-87be-4332-8f5b-dd5e8e2e5d63-utilities\") pod \"redhat-marketplace-6gg9l\" (UID: \"97792460-87be-4332-8f5b-dd5e8e2e5d63\") " pod="openshift-marketplace/redhat-marketplace-6gg9l"
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.881079 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97792460-87be-4332-8f5b-dd5e8e2e5d63-utilities\") pod \"redhat-marketplace-6gg9l\" (UID: \"97792460-87be-4332-8f5b-dd5e8e2e5d63\") " pod="openshift-marketplace/redhat-marketplace-6gg9l"
Jan 21 18:16:09 crc kubenswrapper[5099]: E0121 18:16:09.881180 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:10.381157384 +0000 UTC m=+127.795119845 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.895446 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec86143c-2662-474d-857f-b54aee6207b0-utilities\") pod \"redhat-marketplace-nnlkl\" (UID: \"ec86143c-2662-474d-857f-b54aee6207b0\") " pod="openshift-marketplace/redhat-marketplace-nnlkl"
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.895521 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec86143c-2662-474d-857f-b54aee6207b0-catalog-content\") pod \"redhat-marketplace-nnlkl\" (UID: \"ec86143c-2662-474d-857f-b54aee6207b0\") " pod="openshift-marketplace/redhat-marketplace-nnlkl"
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.895987 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97792460-87be-4332-8f5b-dd5e8e2e5d63-catalog-content\") pod \"redhat-marketplace-6gg9l\" (UID: \"97792460-87be-4332-8f5b-dd5e8e2e5d63\") " pod="openshift-marketplace/redhat-marketplace-6gg9l"
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.902047 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qsx2f"]
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.974589 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qsx2f"
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.980984 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\""
Jan 21 18:16:09 crc kubenswrapper[5099]: I0121 18:16:09.982702 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r"
Jan 21 18:16:09 crc kubenswrapper[5099]: E0121 18:16:09.987047 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:10.487025845 +0000 UTC m=+127.900988306 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.005152 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-67c89758df-48lzl" podStartSLOduration=99.005132214 podStartE2EDuration="1m39.005132214s" podCreationTimestamp="2026-01-21 18:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:16:10.004445456 +0000 UTC m=+127.418407917" watchObservedRunningTime="2026-01-21 18:16:10.005132214 +0000 UTC m=+127.419094675"
Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.091009 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x5qmz" event={"ID":"eb54ba82-2a5f-46a9-8c2c-f59dc17d237d","Type":"ContainerStarted","Data":"24532a9ecbd66301416e84f2d6cc17a024e02c4073fe452f629063fff5d5f1aa"}
Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.091119 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qsx2f"]
Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.118681 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-84k5t" event={"ID":"0494dafa-d272-45bf-a11e-7ca78f92223d","Type":"ContainerStarted","Data":"da06c6baed4f97e94c55eebe9eeeeb6b3bac741355306522c2dc5fcbb451d9dd"}
Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.128209 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9ktb\" (UniqueName: \"kubernetes.io/projected/97792460-87be-4332-8f5b-dd5e8e2e5d63-kube-api-access-p9ktb\") pod \"redhat-marketplace-6gg9l\" (UID: \"97792460-87be-4332-8f5b-dd5e8e2e5d63\") " pod="openshift-marketplace/redhat-marketplace-6gg9l"
Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.129537 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.130059 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28d3b79b-3ce4-427c-834d-9d4b2f9f0601-catalog-content\") pod \"redhat-operators-qsx2f\" (UID: \"28d3b79b-3ce4-427c-834d-9d4b2f9f0601\") " pod="openshift-marketplace/redhat-operators-qsx2f"
Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.130237 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz2cp\" (UniqueName: \"kubernetes.io/projected/28d3b79b-3ce4-427c-834d-9d4b2f9f0601-kube-api-access-wz2cp\") pod \"redhat-operators-qsx2f\" (UID: \"28d3b79b-3ce4-427c-834d-9d4b2f9f0601\") " pod="openshift-marketplace/redhat-operators-qsx2f"
Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.130356 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28d3b79b-3ce4-427c-834d-9d4b2f9f0601-utilities\") pod \"redhat-operators-qsx2f\" (UID: \"28d3b79b-3ce4-427c-834d-9d4b2f9f0601\") " pod="openshift-marketplace/redhat-operators-qsx2f"
Jan 21 18:16:10 crc kubenswrapper[5099]: E0121 18:16:10.130646 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:10.630628692 +0000 UTC m=+128.044591153 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.154463 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzmlr\" (UniqueName: \"kubernetes.io/projected/ec86143c-2662-474d-857f-b54aee6207b0-kube-api-access-vzmlr\") pod \"redhat-marketplace-nnlkl\" (UID: \"ec86143c-2662-474d-857f-b54aee6207b0\") " pod="openshift-marketplace/redhat-marketplace-nnlkl"
Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.155390 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xdblj" event={"ID":"a5ddd790-cf10-4dfc-a9ed-2ad08824bf51","Type":"ContainerStarted","Data":"d46ec0805edc99b67ea2249f94d5e897903fff9c684a23e2c4aa1b573d7ca358"}
Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.157537 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6zt7l" event={"ID":"d6a1f7e0-130a-4bf8-8602-8b1800b7de37","Type":"ContainerStarted","Data":"a3388c614e87933e2d4a60bb7c14dca6b1c0901bf9d5193fe19bda40e3686f0c"}
Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.164198 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-z2ttk" event={"ID":"f08a4565-4bb3-44b9-90e0-1b841c3127ea","Type":"ContainerStarted","Data":"d0ad63559930819f32877e445db49ef8606d1ca63c3716bae324d753286ef284"}
Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.170010 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-84k5t"
Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.171853 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-84k5t"
Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.204365 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v9zdl" event={"ID":"b73a0c1c-91ce-4902-bbcf-cf68e52e0236","Type":"ContainerStarted","Data":"f29c337e5870b2b15dce9eb6a197b542fb032e07b9a6d9512e7f82f28df4c245"}
Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.208113 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ncssr"]
Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.231412 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r"
Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.231563 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28d3b79b-3ce4-427c-834d-9d4b2f9f0601-catalog-content\") pod \"redhat-operators-qsx2f\" (UID: \"28d3b79b-3ce4-427c-834d-9d4b2f9f0601\") " pod="openshift-marketplace/redhat-operators-qsx2f"
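
The prober entries scattered through this window all report "connection refused": the containers have started, but their servers are not listening yet, so the kubelet keeps marking them not ready. Kubernetes HTTP probes count any response with a status code in [200, 400) as success; everything else, including a failed dial, is a failure. A minimal sketch of that rule, using one of the failing endpoints above as the example target:

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    // probeHTTP mimics the success rule of kubelet HTTP probes: any status
    // in [200, 400) passes; anything else, or a transport error such as
    // "connection refused", fails.
    func probeHTTP(url string, timeout time.Duration) error {
    	client := &http.Client{Timeout: timeout}
    	resp, err := client.Get(url)
    	if err != nil {
    		return err // e.g. dial tcp ...: connect: connection refused
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
    		return nil
    	}
    	return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
    }

    func main() {
    	// Endpoint taken from the failing marketplace-operator readiness probe.
    	if err := probeHTTP("http://10.217.0.20:8080/healthz", time.Second); err != nil {
    		fmt.Println("Probe failed:", err)
    		return
    	}
    	fmt.Println("Probe succeeded")
    }

Once the operator processes finish starting and bind their ports, these same probes flip to the "status=\"ready\"" SyncLoop events also visible in the log.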
pod="openshift-marketplace/redhat-operators-qsx2f" Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.231667 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wz2cp\" (UniqueName: \"kubernetes.io/projected/28d3b79b-3ce4-427c-834d-9d4b2f9f0601-kube-api-access-wz2cp\") pod \"redhat-operators-qsx2f\" (UID: \"28d3b79b-3ce4-427c-834d-9d4b2f9f0601\") " pod="openshift-marketplace/redhat-operators-qsx2f" Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.231686 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28d3b79b-3ce4-427c-834d-9d4b2f9f0601-utilities\") pod \"redhat-operators-qsx2f\" (UID: \"28d3b79b-3ce4-427c-834d-9d4b2f9f0601\") " pod="openshift-marketplace/redhat-operators-qsx2f" Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.232811 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28d3b79b-3ce4-427c-834d-9d4b2f9f0601-utilities\") pod \"redhat-operators-qsx2f\" (UID: \"28d3b79b-3ce4-427c-834d-9d4b2f9f0601\") " pod="openshift-marketplace/redhat-operators-qsx2f" Jan 21 18:16:10 crc kubenswrapper[5099]: E0121 18:16:10.233284 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:10.7332503 +0000 UTC m=+128.147212761 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.234331 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28d3b79b-3ce4-427c-834d-9d4b2f9f0601-catalog-content\") pod \"redhat-operators-qsx2f\" (UID: \"28d3b79b-3ce4-427c-834d-9d4b2f9f0601\") " pod="openshift-marketplace/redhat-operators-qsx2f" Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.279503 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.279982 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6fsvr" event={"ID":"450bc18d-8ddc-42eb-bc2b-0cd44d7198b8","Type":"ContainerStarted","Data":"910acc839c2072b6526fe9d5adcb8d0bd7ac871d722b246a67f15aff86d60ad6"} Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.280211 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ncssr" Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.283139 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6gg9l" Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.284531 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-lkws9" Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.294558 5099 patch_prober.go:28] interesting pod/console-operator-67c89758df-48lzl container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/readyz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.294648 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-48lzl" podUID="8855b7e4-a1e8-41ae-b995-832120b0bdcd" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/readyz\": dial tcp 10.217.0.34:8443: connect: connection refused" Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.294702 5099 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-lxg2b container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.294802 5099 patch_prober.go:28] interesting pod/downloads-747b44746d-zlpql container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.294822 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-zlpql" podUID="bee88171-b2f0-49bb-92aa-8a0d79d87cb7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.41:8080/\": dial tcp 10.217.0.41:8080: connect: connection refused" Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.294913 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-lxg2b" podUID="67a0e83c-f043-4329-95ac-4cc0a6ac538f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.295548 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nnlkl" Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.311304 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ncssr"] Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.331063 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-62qpd" podStartSLOduration=99.33103995 podStartE2EDuration="1m39.33103995s" podCreationTimestamp="2026-01-21 18:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:16:10.327001537 +0000 UTC m=+127.740964018" watchObservedRunningTime="2026-01-21 18:16:10.33103995 +0000 UTC m=+127.745002411" Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.332932 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:10 crc kubenswrapper[5099]: E0121 18:16:10.333869 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:10.833845583 +0000 UTC m=+128.247808044 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.438399 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4202775a-8750-4d76-ad90-6a5703048787-catalog-content\") pod \"redhat-operators-ncssr\" (UID: \"4202775a-8750-4d76-ad90-6a5703048787\") " pod="openshift-marketplace/redhat-operators-ncssr" Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.438455 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plm55\" (UniqueName: \"kubernetes.io/projected/4202775a-8750-4d76-ad90-6a5703048787-kube-api-access-plm55\") pod \"redhat-operators-ncssr\" (UID: \"4202775a-8750-4d76-ad90-6a5703048787\") " pod="openshift-marketplace/redhat-operators-ncssr" Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.438472 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4202775a-8750-4d76-ad90-6a5703048787-utilities\") pod \"redhat-operators-ncssr\" (UID: \"4202775a-8750-4d76-ad90-6a5703048787\") " pod="openshift-marketplace/redhat-operators-ncssr" Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.438555 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:10 crc kubenswrapper[5099]: E0121 18:16:10.443245 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:10.943222775 +0000 UTC m=+128.357185406 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.503850 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-lqqhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 18:16:10 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Jan 21 18:16:10 crc kubenswrapper[5099]: [+]process-running ok Jan 21 18:16:10 crc kubenswrapper[5099]: healthz check failed Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.503928 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" podUID="ad44bdbe-5009-4b21-ad83-21185ec2d86d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.540339 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.540706 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4202775a-8750-4d76-ad90-6a5703048787-catalog-content\") pod \"redhat-operators-ncssr\" (UID: \"4202775a-8750-4d76-ad90-6a5703048787\") " pod="openshift-marketplace/redhat-operators-ncssr" Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.540774 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-plm55\" (UniqueName: \"kubernetes.io/projected/4202775a-8750-4d76-ad90-6a5703048787-kube-api-access-plm55\") pod \"redhat-operators-ncssr\" (UID: \"4202775a-8750-4d76-ad90-6a5703048787\") " pod="openshift-marketplace/redhat-operators-ncssr" Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.540795 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4202775a-8750-4d76-ad90-6a5703048787-utilities\") pod \"redhat-operators-ncssr\" (UID: \"4202775a-8750-4d76-ad90-6a5703048787\") " pod="openshift-marketplace/redhat-operators-ncssr" Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.541512 5099 
Jan 21 18:16:10 crc kubenswrapper[5099]: E0121 18:16:10.542230 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:11.042199107 +0000 UTC m=+128.456161568 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.542662 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4202775a-8750-4d76-ad90-6a5703048787-catalog-content\") pod \"redhat-operators-ncssr\" (UID: \"4202775a-8750-4d76-ad90-6a5703048787\") " pod="openshift-marketplace/redhat-operators-ncssr"
Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.557519 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-btpkr"
Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.557567 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-btpkr"
Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.559485 5099 patch_prober.go:28] interesting pod/apiserver-8596bd845d-btpkr container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.9:8443/livez\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body=
Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.559563 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-8596bd845d-btpkr" podUID="0ffb7e64-0677-44ec-971d-fda3f9b87e2d" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.9:8443/livez\": dial tcp 10.217.0.9:8443: connect: connection refused"
Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.642555 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r"
Jan 21 18:16:10 crc kubenswrapper[5099]: E0121 18:16:10.643028 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:11.143012257 +0000 UTC m=+128.556974718 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.736561 5099 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-xr84c container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.736705 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-xr84c" podUID="2b370d45-15f6-4f78-90d8-f15bb7f31949" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.745909 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 18:16:10 crc kubenswrapper[5099]: E0121 18:16:10.746657 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:11.246630489 +0000 UTC m=+128.660592950 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.761084 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wz2cp\" (UniqueName: \"kubernetes.io/projected/28d3b79b-3ce4-427c-834d-9d4b2f9f0601-kube-api-access-wz2cp\") pod \"redhat-operators-qsx2f\" (UID: \"28d3b79b-3ce4-427c-834d-9d4b2f9f0601\") " pod="openshift-marketplace/redhat-operators-qsx2f"
Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.848501 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r"
Jan 21 18:16:10 crc kubenswrapper[5099]: E0121 18:16:10.849392 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:11.34937446 +0000 UTC m=+128.763336921 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.891612 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-plm55\" (UniqueName: \"kubernetes.io/projected/4202775a-8750-4d76-ad90-6a5703048787-kube-api-access-plm55\") pod \"redhat-operators-ncssr\" (UID: \"4202775a-8750-4d76-ad90-6a5703048787\") " pod="openshift-marketplace/redhat-operators-ncssr"
Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.909130 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ncssr"
Jan 21 18:16:10 crc kubenswrapper[5099]: I0121 18:16:10.912087 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-q9l8j" podStartSLOduration=99.912056971 podStartE2EDuration="1m39.912056971s" podCreationTimestamp="2026-01-21 18:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:16:10.908356786 +0000 UTC m=+128.322319247" watchObservedRunningTime="2026-01-21 18:16:10.912056971 +0000 UTC m=+128.326019432"
Jan 21 18:16:11 crc kubenswrapper[5099]: I0121 18:16:11.068382 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qsx2f"
Jan 21 18:16:11 crc kubenswrapper[5099]: I0121 18:16:11.069447 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 18:16:11 crc kubenswrapper[5099]: E0121 18:16:11.075911 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:11.575867612 +0000 UTC m=+128.989830073 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 18:16:11 crc kubenswrapper[5099]: I0121 18:16:11.094843 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 18:16:11 crc kubenswrapper[5099]: I0121 18:16:11.179451 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r"
Jan 21 18:16:11 crc kubenswrapper[5099]: E0121 18:16:11.180051 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:11.680033759 +0000 UTC m=+129.093996220 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
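
Interleaved with the volume retries, the kubelet is also rejecting TLS handshakes from 192.168.126.11 (port 57476 earlier, port 57478 below) with "no serving certificate available for the kubelet". When the kubelet is configured to bootstrap its serving certificate, it obtains one through a CertificateSigningRequest that has to be approved before scrapes of its HTTPS port succeed; until then every incoming handshake fails like this. A client-side sketch that reproduces what the scraper sees; the address and port are the node IP from the log plus the conventional kubelet serving port, both assumptions:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Node IP from the log, plus the conventional kubelet serving port.
    	addr := "192.168.126.11:10250"

    	d := &net.Dialer{Timeout: 3 * time.Second}
    	// InsecureSkipVerify: we only care whether the kubelet can complete
    	// a handshake at all, i.e. whether it has a certificate to present.
    	conn, err := tls.DialWithDialer(d, "tcp", addr, &tls.Config{InsecureSkipVerify: true})
    	if err != nil {
    		// While no serving certificate is available, the handshake is
    		// rejected and the kubelet logs the error seen in this capture.
    		fmt.Println("handshake failed:", err)
    		return
    	}
    	defer conn.Close()
    	if certs := conn.ConnectionState().PeerCertificates; len(certs) > 0 {
    		fmt.Println("kubelet presented serving certificate:", certs[0].Subject, "expires", certs[0].NotAfter)
    	}
    }

Once the kubelet-serving CSR is approved and the certificate issued, the same dial completes and these handshake errors stop appearing in the journal.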
Jan 21 18:16:11 crc kubenswrapper[5099]: I0121 18:16:11.262903 5099 ???:1] "http: TLS handshake error from 192.168.126.11:57478: no serving certificate available for the kubelet"
Jan 21 18:16:11 crc kubenswrapper[5099]: I0121 18:16:11.299532 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 18:16:11 crc kubenswrapper[5099]: E0121 18:16:11.299757 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:11.799707357 +0000 UTC m=+129.213669818 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 18:16:11 crc kubenswrapper[5099]: I0121 18:16:11.299946 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r"
Jan 21 18:16:11 crc kubenswrapper[5099]: E0121 18:16:11.300439 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:11.800432366 +0000 UTC m=+129.214394827 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 18:16:11 crc kubenswrapper[5099]: I0121 18:16:11.314686 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-btpkr" event={"ID":"0ffb7e64-0677-44ec-971d-fda3f9b87e2d","Type":"ContainerStarted","Data":"0a0483b508a6b1cc8e0b940dafdd19942ba88717c775917e38ba3dccb220946d"}
Jan 21 18:16:11 crc kubenswrapper[5099]: I0121 18:16:11.318097 5099 patch_prober.go:28] interesting pod/openshift-config-operator-5777786469-lhgtf container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body=
Jan 21 18:16:11 crc kubenswrapper[5099]: I0121 18:16:11.318163 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-5777786469-lhgtf" podUID="ee64e319-d2fd-4a23-808e-a4ab684a16af" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused"
Jan 21 18:16:11 crc kubenswrapper[5099]: I0121 18:16:11.318266 5099 patch_prober.go:28] interesting pod/console-operator-67c89758df-48lzl container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/readyz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body=
Jan 21 18:16:11 crc kubenswrapper[5099]: I0121 18:16:11.318401 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-48lzl" podUID="8855b7e4-a1e8-41ae-b995-832120b0bdcd" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/readyz\": dial tcp 10.217.0.34:8443: connect: connection refused"
Jan 21 18:16:11 crc kubenswrapper[5099]: I0121 18:16:11.318512 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-lhgtf"
Jan 21 18:16:11 crc kubenswrapper[5099]: I0121 18:16:11.321280 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-p9ggs"
Jan 21 18:16:11 crc kubenswrapper[5099]: I0121 18:16:11.505354 5099 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-lxg2b container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body=
Jan 21 18:16:11 crc kubenswrapper[5099]: I0121 18:16:11.505436 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-547dbd544d-lxg2b" podUID="67a0e83c-f043-4329-95ac-4cc0a6ac538f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused"
Jan 21 18:16:11 crc kubenswrapper[5099]: I0121 18:16:11.506078 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 18:16:11 crc kubenswrapper[5099]: E0121 18:16:11.506478 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:12.00645717 +0000 UTC m=+129.420419641 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 18:16:11 crc kubenswrapper[5099]: I0121 18:16:11.508961 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-68cf44c8b8-lqqhp"
Jan 21 18:16:11 crc kubenswrapper[5099]: I0121 18:16:11.530951 5099 patch_prober.go:28] interesting pod/downloads-747b44746d-zlpql container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.41:8080/\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body=
Jan 21 18:16:11 crc kubenswrapper[5099]: I0121 18:16:11.531014 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-zlpql" podUID="bee88171-b2f0-49bb-92aa-8a0d79d87cb7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.41:8080/\": dial tcp 10.217.0.41:8080: connect: connection refused"
Jan 21 18:16:11 crc kubenswrapper[5099]: I0121 18:16:11.537556 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-lqqhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 18:16:11 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld
Jan 21 18:16:11 crc kubenswrapper[5099]: [+]process-running ok
Jan 21 18:16:11 crc kubenswrapper[5099]: healthz check failed
Jan 21 18:16:11 crc kubenswrapper[5099]: I0121 18:16:11.537624 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" podUID="ad44bdbe-5009-4b21-ad83-21185ec2d86d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 18:16:11 crc kubenswrapper[5099]: I0121 18:16:11.616498 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r"
Jan 21 18:16:11 crc kubenswrapper[5099]: E0121 18:16:11.626049 5099 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:12.126033055 +0000 UTC m=+129.539995516 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:11 crc kubenswrapper[5099]: I0121 18:16:11.739214 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:11 crc kubenswrapper[5099]: E0121 18:16:11.760043 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:12.259919591 +0000 UTC m=+129.673882042 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:11 crc kubenswrapper[5099]: I0121 18:16:11.760642 5099 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-xr84c container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 18:16:11 crc kubenswrapper[5099]: I0121 18:16:11.760854 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-xr84c" podUID="2b370d45-15f6-4f78-90d8-f15bb7f31949" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 18:16:11 crc kubenswrapper[5099]: I0121 18:16:11.762505 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29483655-48djt" podStartSLOduration=71.762488418 podStartE2EDuration="1m11.762488418s" podCreationTimestamp="2026-01-21 18:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:16:11.739667337 +0000 UTC m=+129.153629828" watchObservedRunningTime="2026-01-21 18:16:11.762488418 +0000 UTC m=+129.176450879" Jan 21 18:16:11 crc kubenswrapper[5099]: I0121 18:16:11.785571 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-console/console-64d44f6ddf-6r5xz" Jan 21 18:16:11 crc kubenswrapper[5099]: I0121 18:16:11.798489 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d44f6ddf-6r5xz" Jan 21 18:16:11 crc kubenswrapper[5099]: I0121 18:16:11.800525 5099 patch_prober.go:28] interesting pod/console-operator-67c89758df-48lzl container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Jan 21 18:16:11 crc kubenswrapper[5099]: I0121 18:16:11.800611 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-67c89758df-48lzl" podUID="8855b7e4-a1e8-41ae-b995-832120b0bdcd" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" Jan 21 18:16:11 crc kubenswrapper[5099]: I0121 18:16:11.831338 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nnlkl"] Jan 21 18:16:11 crc kubenswrapper[5099]: I0121 18:16:11.839565 5099 patch_prober.go:28] interesting pod/console-64d44f6ddf-6r5xz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Jan 21 18:16:11 crc kubenswrapper[5099]: I0121 18:16:11.839655 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-6r5xz" podUID="cf468a9b-3840-46c0-8390-79ec278be1d0" containerName="console" probeResult="failure" output="Get \"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused" Jan 21 18:16:11 crc kubenswrapper[5099]: I0121 18:16:11.864768 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:11 crc kubenswrapper[5099]: E0121 18:16:11.868507 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:12.368475902 +0000 UTC m=+129.782438533 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:11 crc kubenswrapper[5099]: I0121 18:16:11.966728 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:11 crc kubenswrapper[5099]: E0121 18:16:11.968141 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:12.468119841 +0000 UTC m=+129.882082302 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:11 crc kubenswrapper[5099]: I0121 18:16:11.973394 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6gg9l"] Jan 21 18:16:11 crc kubenswrapper[5099]: I0121 18:16:11.982280 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-vgfbc" podStartSLOduration=100.982260707 podStartE2EDuration="1m40.982260707s" podCreationTimestamp="2026-01-21 18:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:16:11.982205046 +0000 UTC m=+129.396167507" watchObservedRunningTime="2026-01-21 18:16:11.982260707 +0000 UTC m=+129.396223168" Jan 21 18:16:12 crc kubenswrapper[5099]: I0121 18:16:12.070071 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:12 crc kubenswrapper[5099]: E0121 18:16:12.070496 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:12.570478991 +0000 UTC m=+129.984441452 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:12 crc kubenswrapper[5099]: I0121 18:16:12.165555 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-lkws9" podStartSLOduration=15.165516621 podStartE2EDuration="15.165516621s" podCreationTimestamp="2026-01-21 18:15:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:16:12.125427903 +0000 UTC m=+129.539390364" watchObservedRunningTime="2026-01-21 18:16:12.165516621 +0000 UTC m=+129.579479082" Jan 21 18:16:12 crc kubenswrapper[5099]: I0121 18:16:12.165814 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-2zxgx" podStartSLOduration=101.165808169 podStartE2EDuration="1m41.165808169s" podCreationTimestamp="2026-01-21 18:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:16:12.055547694 +0000 UTC m=+129.469510175" watchObservedRunningTime="2026-01-21 18:16:12.165808169 +0000 UTC m=+129.579770620" Jan 21 18:16:12 crc kubenswrapper[5099]: I0121 18:16:12.171559 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:12 crc kubenswrapper[5099]: E0121 18:16:12.171983 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:12.671963668 +0000 UTC m=+130.085926129 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:12 crc kubenswrapper[5099]: I0121 18:16:12.273722 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:12 crc kubenswrapper[5099]: E0121 18:16:12.274143 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:12.774130793 +0000 UTC m=+130.188093254 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:12 crc kubenswrapper[5099]: I0121 18:16:12.391175 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nnlkl" event={"ID":"ec86143c-2662-474d-857f-b54aee6207b0","Type":"ContainerStarted","Data":"d28cdc45a3e754c99a313f8576a1fa9e7e65a75e2f0ec0f6b89d51030e597d19"} Jan 21 18:16:12 crc kubenswrapper[5099]: I0121 18:16:12.392278 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:12 crc kubenswrapper[5099]: E0121 18:16:12.392758 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:12.892723553 +0000 UTC m=+130.306686014 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:12 crc kubenswrapper[5099]: I0121 18:16:12.425103 5099 generic.go:358] "Generic (PLEG): container finished" podID="a5ddd790-cf10-4dfc-a9ed-2ad08824bf51" containerID="af0fd80aaccd37345e55bda5b321f097baa0df0cc700677a32d539575a37449d" exitCode=0 Jan 21 18:16:12 crc kubenswrapper[5099]: I0121 18:16:12.425247 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xdblj" event={"ID":"a5ddd790-cf10-4dfc-a9ed-2ad08824bf51","Type":"ContainerDied","Data":"af0fd80aaccd37345e55bda5b321f097baa0df0cc700677a32d539575a37449d"} Jan 21 18:16:12 crc kubenswrapper[5099]: I0121 18:16:12.453286 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6gg9l" event={"ID":"97792460-87be-4332-8f5b-dd5e8e2e5d63","Type":"ContainerStarted","Data":"7fc6197adb8e4d14872dfc593fb723db00dded6a80ee129cb4ddb9642898d903"} Jan 21 18:16:12 crc kubenswrapper[5099]: I0121 18:16:12.491026 5099 generic.go:358] "Generic (PLEG): container finished" podID="b73a0c1c-91ce-4902-bbcf-cf68e52e0236" containerID="1477e8efa11a8826f33dc15e74ed84d13b88e34d2abed6ecafe308ce2db75e91" exitCode=0 Jan 21 18:16:12 crc kubenswrapper[5099]: I0121 18:16:12.491152 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v9zdl" event={"ID":"b73a0c1c-91ce-4902-bbcf-cf68e52e0236","Type":"ContainerDied","Data":"1477e8efa11a8826f33dc15e74ed84d13b88e34d2abed6ecafe308ce2db75e91"} Jan 21 18:16:12 crc kubenswrapper[5099]: I0121 18:16:12.503229 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:12 crc kubenswrapper[5099]: E0121 18:16:12.503646 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:13.003633474 +0000 UTC m=+130.417595935 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:12 crc kubenswrapper[5099]: I0121 18:16:12.504890 5099 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-xr84c container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 18:16:12 crc kubenswrapper[5099]: I0121 18:16:12.504934 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-xr84c" podUID="2b370d45-15f6-4f78-90d8-f15bb7f31949" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 18:16:12 crc kubenswrapper[5099]: I0121 18:16:12.508557 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-lqqhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 18:16:12 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Jan 21 18:16:12 crc kubenswrapper[5099]: [+]process-running ok Jan 21 18:16:12 crc kubenswrapper[5099]: healthz check failed Jan 21 18:16:12 crc kubenswrapper[5099]: I0121 18:16:12.508607 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" podUID="ad44bdbe-5009-4b21-ad83-21185ec2d86d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 18:16:12 crc kubenswrapper[5099]: I0121 18:16:12.534645 5099 generic.go:358] "Generic (PLEG): container finished" podID="450bc18d-8ddc-42eb-bc2b-0cd44d7198b8" containerID="6527d161dad9cd681d5d03f97d6438ea8efe285ef627a5d577b33bc840d90f8f" exitCode=0 Jan 21 18:16:12 crc kubenswrapper[5099]: I0121 18:16:12.534841 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6fsvr" event={"ID":"450bc18d-8ddc-42eb-bc2b-0cd44d7198b8","Type":"ContainerDied","Data":"6527d161dad9cd681d5d03f97d6438ea8efe285ef627a5d577b33bc840d90f8f"} Jan 21 18:16:12 crc kubenswrapper[5099]: I0121 18:16:12.560925 5099 generic.go:358] "Generic (PLEG): container finished" podID="eb54ba82-2a5f-46a9-8c2c-f59dc17d237d" containerID="329f0d0d946768f6cdc3add419d1dc54ab9bb87976d3a87f6ba162d6d483a8b1" exitCode=0 Jan 21 18:16:12 crc kubenswrapper[5099]: I0121 18:16:12.561053 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x5qmz" event={"ID":"eb54ba82-2a5f-46a9-8c2c-f59dc17d237d","Type":"ContainerDied","Data":"329f0d0d946768f6cdc3add419d1dc54ab9bb87976d3a87f6ba162d6d483a8b1"} Jan 21 18:16:12 crc kubenswrapper[5099]: I0121 18:16:12.592882 5099 patch_prober.go:28] interesting pod/openshift-config-operator-5777786469-lhgtf container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 
10.217.0.12:8443: connect: connection refused" start-of-body= Jan 21 18:16:12 crc kubenswrapper[5099]: I0121 18:16:12.592971 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-5777786469-lhgtf" podUID="ee64e319-d2fd-4a23-808e-a4ab684a16af" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 21 18:16:12 crc kubenswrapper[5099]: I0121 18:16:12.599414 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-xfrc5" event={"ID":"da3a0959-1a85-473a-95d5-51b77e30c5da","Type":"ContainerStarted","Data":"515963924af80c441c9901c3bd3a7d6a38ef61e9d249d455c1fc9625384a566f"} Jan 21 18:16:12 crc kubenswrapper[5099]: I0121 18:16:12.599969 5099 patch_prober.go:28] interesting pod/openshift-config-operator-5777786469-lhgtf container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 21 18:16:12 crc kubenswrapper[5099]: I0121 18:16:12.600035 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-5777786469-lhgtf" podUID="ee64e319-d2fd-4a23-808e-a4ab684a16af" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 21 18:16:12 crc kubenswrapper[5099]: I0121 18:16:12.608155 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:12 crc kubenswrapper[5099]: E0121 18:16:12.608997 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:13.108975308 +0000 UTC m=+130.522937779 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:12 crc kubenswrapper[5099]: I0121 18:16:12.655324 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6zt7l" podStartSLOduration=101.655299593 podStartE2EDuration="1m41.655299593s" podCreationTimestamp="2026-01-21 18:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:16:12.451338381 +0000 UTC m=+129.865300852" watchObservedRunningTime="2026-01-21 18:16:12.655299593 +0000 UTC m=+130.069262054" Jan 21 18:16:12 crc kubenswrapper[5099]: I0121 18:16:12.655641 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-z2ttk" podStartSLOduration=101.655636312 podStartE2EDuration="1m41.655636312s" podCreationTimestamp="2026-01-21 18:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:16:12.651273604 +0000 UTC m=+130.065236075" watchObservedRunningTime="2026-01-21 18:16:12.655636312 +0000 UTC m=+130.069598773" Jan 21 18:16:12 crc kubenswrapper[5099]: I0121 18:16:12.713664 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:12 crc kubenswrapper[5099]: E0121 18:16:12.715393 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:13.215377268 +0000 UTC m=+130.629339939 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:12 crc kubenswrapper[5099]: I0121 18:16:12.764223 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-84k5t" podStartSLOduration=101.764207435 podStartE2EDuration="1m41.764207435s" podCreationTimestamp="2026-01-21 18:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:16:12.762760329 +0000 UTC m=+130.176722800" watchObservedRunningTime="2026-01-21 18:16:12.764207435 +0000 UTC m=+130.178169896" Jan 21 18:16:12 crc kubenswrapper[5099]: I0121 18:16:12.818558 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:12 crc kubenswrapper[5099]: E0121 18:16:12.818889 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:13.318850935 +0000 UTC m=+130.732813396 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:12 crc kubenswrapper[5099]: I0121 18:16:12.909044 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-btpkr" podStartSLOduration=101.909023965 podStartE2EDuration="1m41.909023965s" podCreationTimestamp="2026-01-21 18:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:16:12.870453291 +0000 UTC m=+130.284415752" watchObservedRunningTime="2026-01-21 18:16:12.909023965 +0000 UTC m=+130.322986426" Jan 21 18:16:12 crc kubenswrapper[5099]: I0121 18:16:12.909526 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qsx2f"] Jan 21 18:16:12 crc kubenswrapper[5099]: I0121 18:16:12.920457 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:12 crc kubenswrapper[5099]: E0121 18:16:12.920842 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:13.420828346 +0000 UTC m=+130.834790797 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:13 crc kubenswrapper[5099]: I0121 18:16:13.027477 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:13 crc kubenswrapper[5099]: E0121 18:16:13.027989 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:13.527968895 +0000 UTC m=+130.941931366 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:13 crc kubenswrapper[5099]: I0121 18:16:13.134033 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:13 crc kubenswrapper[5099]: E0121 18:16:13.134374 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:13.634359825 +0000 UTC m=+131.048322286 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:13 crc kubenswrapper[5099]: I0121 18:16:13.235132 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:13 crc kubenswrapper[5099]: E0121 18:16:13.235330 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:13.735295249 +0000 UTC m=+131.149257720 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:13 crc kubenswrapper[5099]: I0121 18:16:13.235843 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:13 crc kubenswrapper[5099]: E0121 18:16:13.236169 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:13.73615466 +0000 UTC m=+131.150117121 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:13 crc kubenswrapper[5099]: I0121 18:16:13.336858 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:13 crc kubenswrapper[5099]: E0121 18:16:13.337149 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:13.837133126 +0000 UTC m=+131.251095587 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:13 crc kubenswrapper[5099]: I0121 18:16:13.337196 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:13 crc kubenswrapper[5099]: E0121 18:16:13.337535 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:13.837518166 +0000 UTC m=+131.251480627 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:13 crc kubenswrapper[5099]: I0121 18:16:13.403231 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ncssr"] Jan 21 18:16:13 crc kubenswrapper[5099]: I0121 18:16:13.438223 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:13 crc kubenswrapper[5099]: E0121 18:16:13.438645 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:13.938624985 +0000 UTC m=+131.352587456 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:13 crc kubenswrapper[5099]: I0121 18:16:13.502940 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-lqqhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 18:16:13 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Jan 21 18:16:13 crc kubenswrapper[5099]: [+]process-running ok Jan 21 18:16:13 crc kubenswrapper[5099]: healthz check failed Jan 21 18:16:13 crc kubenswrapper[5099]: I0121 18:16:13.503006 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" podUID="ad44bdbe-5009-4b21-ad83-21185ec2d86d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 18:16:13 crc kubenswrapper[5099]: I0121 18:16:13.515631 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-lhgtf" podStartSLOduration=102.515611338 podStartE2EDuration="1m42.515611338s" podCreationTimestamp="2026-01-21 18:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:16:13.515016994 +0000 UTC m=+130.928979475" watchObservedRunningTime="2026-01-21 18:16:13.515611338 +0000 UTC m=+130.929573799" Jan 21 18:16:13 crc kubenswrapper[5099]: I0121 18:16:13.540753 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:13 crc kubenswrapper[5099]: E0121 18:16:13.541137 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:14.041117498 +0000 UTC m=+131.455080019 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:13 crc kubenswrapper[5099]: I0121 18:16:13.645836 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:13 crc kubenswrapper[5099]: E0121 18:16:13.646123 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:14.146106554 +0000 UTC m=+131.560069015 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:13 crc kubenswrapper[5099]: I0121 18:16:13.718382 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ncssr" event={"ID":"4202775a-8750-4d76-ad90-6a5703048787","Type":"ContainerStarted","Data":"b60882b21035216a52804c70b1ad5ba9115012220fb1e376e9b6aaad14797df8"} Jan 21 18:16:13 crc kubenswrapper[5099]: I0121 18:16:13.750790 5099 patch_prober.go:28] interesting pod/apiserver-9ddfb9f55-84k5t container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 21 18:16:13 crc kubenswrapper[5099]: [+]log ok Jan 21 18:16:13 crc kubenswrapper[5099]: [+]etcd ok Jan 21 18:16:13 crc kubenswrapper[5099]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 21 18:16:13 crc kubenswrapper[5099]: [+]poststarthook/generic-apiserver-start-informers ok Jan 21 18:16:13 crc kubenswrapper[5099]: [+]poststarthook/max-in-flight-filter ok Jan 21 18:16:13 crc kubenswrapper[5099]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 21 18:16:13 crc kubenswrapper[5099]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 21 18:16:13 crc kubenswrapper[5099]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 21 18:16:13 crc kubenswrapper[5099]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 21 18:16:13 crc kubenswrapper[5099]: [+]poststarthook/project.openshift.io-projectcache ok Jan 21 18:16:13 crc kubenswrapper[5099]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 21 18:16:13 crc kubenswrapper[5099]: [-]poststarthook/openshift.io-startinformers failed: reason withheld Jan 21 18:16:13 crc kubenswrapper[5099]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 
21 18:16:13 crc kubenswrapper[5099]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 21 18:16:13 crc kubenswrapper[5099]: livez check failed Jan 21 18:16:13 crc kubenswrapper[5099]: I0121 18:16:13.750867 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-9ddfb9f55-84k5t" podUID="0494dafa-d272-45bf-a11e-7ca78f92223d" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 18:16:13 crc kubenswrapper[5099]: I0121 18:16:13.751665 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:13 crc kubenswrapper[5099]: E0121 18:16:13.752351 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:14.25233758 +0000 UTC m=+131.666300041 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:13 crc kubenswrapper[5099]: I0121 18:16:13.755862 5099 generic.go:358] "Generic (PLEG): container finished" podID="ec86143c-2662-474d-857f-b54aee6207b0" containerID="432bc00de22f2ddaa0286e5140666815be5e781c63991ef00ae26121c59d6c2d" exitCode=0 Jan 21 18:16:13 crc kubenswrapper[5099]: I0121 18:16:13.755946 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nnlkl" event={"ID":"ec86143c-2662-474d-857f-b54aee6207b0","Type":"ContainerDied","Data":"432bc00de22f2ddaa0286e5140666815be5e781c63991ef00ae26121c59d6c2d"} Jan 21 18:16:13 crc kubenswrapper[5099]: I0121 18:16:13.801247 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-r6vwz" event={"ID":"d0d4f813-c328-441b-963b-5241f73f9da2","Type":"ContainerStarted","Data":"2cd13a8c5c4a0826ecc7b61252802f10da995456ad9b13a58a9965c3c643faeb"} Jan 21 18:16:13 crc kubenswrapper[5099]: I0121 18:16:13.853831 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:13 crc kubenswrapper[5099]: E0121 18:16:13.855330 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:14.355304875 +0000 UTC m=+131.769267346 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:13 crc kubenswrapper[5099]: I0121 18:16:13.926497 5099 generic.go:358] "Generic (PLEG): container finished" podID="97792460-87be-4332-8f5b-dd5e8e2e5d63" containerID="0068f289149287fcd6feaa6b33ed4948a8ad3f690eec5ed887ed1521d34b1ebc" exitCode=0 Jan 21 18:16:13 crc kubenswrapper[5099]: I0121 18:16:13.926694 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6gg9l" event={"ID":"97792460-87be-4332-8f5b-dd5e8e2e5d63","Type":"ContainerDied","Data":"0068f289149287fcd6feaa6b33ed4948a8ad3f690eec5ed887ed1521d34b1ebc"} Jan 21 18:16:13 crc kubenswrapper[5099]: I0121 18:16:13.960523 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qsx2f" event={"ID":"28d3b79b-3ce4-427c-834d-9d4b2f9f0601","Type":"ContainerStarted","Data":"779b3186531e80fb41012a82cf8100eafab73292ccf5a446c821271aee7a9429"} Jan 21 18:16:13 crc kubenswrapper[5099]: I0121 18:16:13.965361 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:13 crc kubenswrapper[5099]: E0121 18:16:13.966855 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:14.466810261 +0000 UTC m=+131.880772722 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:14 crc kubenswrapper[5099]: I0121 18:16:14.060774 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-p9ggs" podStartSLOduration=103.060756943 podStartE2EDuration="1m43.060756943s" podCreationTimestamp="2026-01-21 18:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:16:13.698436958 +0000 UTC m=+131.112399419" watchObservedRunningTime="2026-01-21 18:16:14.060756943 +0000 UTC m=+131.474719404" Jan 21 18:16:14 crc kubenswrapper[5099]: I0121 18:16:14.066897 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:14 crc kubenswrapper[5099]: E0121 18:16:14.067198 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:14.567135181 +0000 UTC m=+131.981097652 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:14 crc kubenswrapper[5099]: I0121 18:16:14.068064 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:14 crc kubenswrapper[5099]: E0121 18:16:14.070236 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:14.570212147 +0000 UTC m=+131.984174608 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:14 crc kubenswrapper[5099]: I0121 18:16:14.170765 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:14 crc kubenswrapper[5099]: E0121 18:16:14.171266 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:14.671238734 +0000 UTC m=+132.085201195 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:14 crc kubenswrapper[5099]: I0121 18:16:14.273257 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:14 crc kubenswrapper[5099]: E0121 18:16:14.273652 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:14.773633395 +0000 UTC m=+132.187595856 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:14 crc kubenswrapper[5099]: I0121 18:16:14.323262 5099 ???:1] "http: TLS handshake error from 192.168.126.11:57492: no serving certificate available for the kubelet" Jan 21 18:16:14 crc kubenswrapper[5099]: I0121 18:16:14.378440 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:14 crc kubenswrapper[5099]: E0121 18:16:14.378982 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:14.878959139 +0000 UTC m=+132.292921600 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:14 crc kubenswrapper[5099]: I0121 18:16:14.379076 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-xfrc5" podStartSLOduration=103.379058401 podStartE2EDuration="1m43.379058401s" podCreationTimestamp="2026-01-21 18:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:16:14.377148824 +0000 UTC m=+131.791111285" watchObservedRunningTime="2026-01-21 18:16:14.379058401 +0000 UTC m=+131.793020852" Jan 21 18:16:14 crc kubenswrapper[5099]: I0121 18:16:14.485627 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:14 crc kubenswrapper[5099]: E0121 18:16:14.486010 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:14.985995874 +0000 UTC m=+132.399958335 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:14 crc kubenswrapper[5099]: I0121 18:16:14.565186 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-lqqhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 18:16:14 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Jan 21 18:16:14 crc kubenswrapper[5099]: [+]process-running ok Jan 21 18:16:14 crc kubenswrapper[5099]: healthz check failed Jan 21 18:16:14 crc kubenswrapper[5099]: I0121 18:16:14.565324 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" podUID="ad44bdbe-5009-4b21-ad83-21185ec2d86d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 18:16:14 crc kubenswrapper[5099]: I0121 18:16:14.615791 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:14 crc kubenswrapper[5099]: E0121 18:16:14.616255 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:15.116229263 +0000 UTC m=+132.530191724 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:14 crc kubenswrapper[5099]: I0121 18:16:14.720870 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:14 crc kubenswrapper[5099]: E0121 18:16:14.721850 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:15.221830773 +0000 UTC m=+132.635793234 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:14 crc kubenswrapper[5099]: I0121 18:16:14.790466 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Jan 21 18:16:14 crc kubenswrapper[5099]: I0121 18:16:14.822126 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:14 crc kubenswrapper[5099]: E0121 18:16:14.822658 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:15.322640275 +0000 UTC m=+132.736602726 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:14 crc kubenswrapper[5099]: E0121 18:16:14.908779 5099 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee64e319_d2fd_4a23_808e_a4ab684a16af.slice/crio-conmon-febf57ef5d3b5b3933f524609bb87fa63900036e577a74c06ffe7fccde4ea6f9.scope\": RecentStats: unable to find data in memory cache]" Jan 21 18:16:14 crc kubenswrapper[5099]: I0121 18:16:14.930028 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:14 crc kubenswrapper[5099]: E0121 18:16:14.930415 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:15.430401319 +0000 UTC m=+132.844363780 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:14 crc kubenswrapper[5099]: E0121 18:16:14.935929 5099 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f7db3ee6f61c5c01e7605630be93c8b733c80c9c6c756304b44f3630f68f632a" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 21 18:16:15 crc kubenswrapper[5099]: I0121 18:16:15.000912 5099 generic.go:358] "Generic (PLEG): container finished" podID="28d3b79b-3ce4-427c-834d-9d4b2f9f0601" containerID="d7b335b1790036f918c921c78ea31360d70e7fb73a9225116ec50d01437dd00b" exitCode=0 Jan 21 18:16:15 crc kubenswrapper[5099]: I0121 18:16:15.002647 5099 generic.go:358] "Generic (PLEG): container finished" podID="4202775a-8750-4d76-ad90-6a5703048787" containerID="c590809ce905abe3a59ed4b7c7f7aff98fc18ba35b34b7a0f0f584f3257b4fcd" exitCode=0 Jan 21 18:16:15 crc kubenswrapper[5099]: I0121 18:16:15.033563 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:15 crc kubenswrapper[5099]: E0121 18:16:15.033691 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:15.533668641 +0000 UTC m=+132.947631102 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:15 crc kubenswrapper[5099]: I0121 18:16:15.034123 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:15 crc kubenswrapper[5099]: E0121 18:16:15.034592 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:15.534581524 +0000 UTC m=+132.948543985 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:15 crc kubenswrapper[5099]: I0121 18:16:15.135583 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:15 crc kubenswrapper[5099]: E0121 18:16:15.136233 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:15.636209526 +0000 UTC m=+133.050171987 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:15 crc kubenswrapper[5099]: I0121 18:16:15.188123 5099 patch_prober.go:28] interesting pod/apiserver-9ddfb9f55-84k5t container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 21 18:16:15 crc kubenswrapper[5099]: [+]log ok Jan 21 18:16:15 crc kubenswrapper[5099]: [+]etcd ok Jan 21 18:16:15 crc kubenswrapper[5099]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 21 18:16:15 crc kubenswrapper[5099]: [+]poststarthook/generic-apiserver-start-informers ok Jan 21 18:16:15 crc kubenswrapper[5099]: [+]poststarthook/max-in-flight-filter ok Jan 21 18:16:15 crc kubenswrapper[5099]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 21 18:16:15 crc kubenswrapper[5099]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 21 18:16:15 crc kubenswrapper[5099]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 21 18:16:15 crc kubenswrapper[5099]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Jan 21 18:16:15 crc kubenswrapper[5099]: [+]poststarthook/project.openshift.io-projectcache ok Jan 21 18:16:15 crc kubenswrapper[5099]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 21 18:16:15 crc kubenswrapper[5099]: [+]poststarthook/openshift.io-startinformers ok Jan 21 18:16:15 crc kubenswrapper[5099]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 21 18:16:15 crc kubenswrapper[5099]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 21 18:16:15 crc kubenswrapper[5099]: livez check failed Jan 21 18:16:15 crc kubenswrapper[5099]: I0121 18:16:15.188231 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-9ddfb9f55-84k5t" 
podUID="0494dafa-d272-45bf-a11e-7ca78f92223d" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 18:16:15 crc kubenswrapper[5099]: E0121 18:16:15.196771 5099 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f7db3ee6f61c5c01e7605630be93c8b733c80c9c6c756304b44f3630f68f632a" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 21 18:16:15 crc kubenswrapper[5099]: E0121 18:16:15.218936 5099 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f7db3ee6f61c5c01e7605630be93c8b733c80c9c6c756304b44f3630f68f632a" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 21 18:16:15 crc kubenswrapper[5099]: E0121 18:16:15.219020 5099 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-zjwsj" podUID="e0db719c-cb3c-4c7d-ab76-20a341a011e6" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 21 18:16:15 crc kubenswrapper[5099]: I0121 18:16:15.242184 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:15 crc kubenswrapper[5099]: E0121 18:16:15.242563 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:15.742550794 +0000 UTC m=+133.156513255 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:15 crc kubenswrapper[5099]: I0121 18:16:15.347113 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:15 crc kubenswrapper[5099]: E0121 18:16:15.347723 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:15.847706113 +0000 UTC m=+133.261668574 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:15 crc kubenswrapper[5099]: I0121 18:16:15.449548 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:15 crc kubenswrapper[5099]: E0121 18:16:15.450004 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:15.949981112 +0000 UTC m=+133.363943573 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:15 crc kubenswrapper[5099]: I0121 18:16:15.496870 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-lqqhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 18:16:15 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Jan 21 18:16:15 crc kubenswrapper[5099]: [+]process-running ok Jan 21 18:16:15 crc kubenswrapper[5099]: healthz check failed Jan 21 18:16:15 crc kubenswrapper[5099]: I0121 18:16:15.496964 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" podUID="ad44bdbe-5009-4b21-ad83-21185ec2d86d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 18:16:15 crc kubenswrapper[5099]: I0121 18:16:15.550715 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:15 crc kubenswrapper[5099]: E0121 18:16:15.551082 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:16.051063881 +0000 UTC m=+133.465026342 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:15 crc kubenswrapper[5099]: I0121 18:16:15.652375 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:15 crc kubenswrapper[5099]: E0121 18:16:15.652850 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:16.152835886 +0000 UTC m=+133.566798347 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:15 crc kubenswrapper[5099]: I0121 18:16:15.753994 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:15 crc kubenswrapper[5099]: E0121 18:16:15.754205 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:16.254166221 +0000 UTC m=+133.668128682 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:15 crc kubenswrapper[5099]: I0121 18:16:15.754959 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:15 crc kubenswrapper[5099]: E0121 18:16:15.755392 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:16.255364421 +0000 UTC m=+133.669326872 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:15 crc kubenswrapper[5099]: I0121 18:16:15.857148 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:15 crc kubenswrapper[5099]: E0121 18:16:15.857390 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:16.357357921 +0000 UTC m=+133.771320382 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:15 crc kubenswrapper[5099]: I0121 18:16:15.857841 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:15 crc kubenswrapper[5099]: E0121 18:16:15.858258 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:16.358250914 +0000 UTC m=+133.772213375 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:15 crc kubenswrapper[5099]: I0121 18:16:15.958852 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:15 crc kubenswrapper[5099]: E0121 18:16:15.959251 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:16.45923472 +0000 UTC m=+133.873197181 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:16 crc kubenswrapper[5099]: I0121 18:16:16.060409 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:16 crc kubenswrapper[5099]: E0121 18:16:16.060846 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:16.560833092 +0000 UTC m=+133.974795553 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:16 crc kubenswrapper[5099]: I0121 18:16:16.162965 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:16 crc kubenswrapper[5099]: E0121 18:16:16.163072 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:16.663055788 +0000 UTC m=+134.077018249 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:16 crc kubenswrapper[5099]: I0121 18:16:16.163310 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:16 crc kubenswrapper[5099]: E0121 18:16:16.163989 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:16.66396156 +0000 UTC m=+134.077924011 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:16 crc kubenswrapper[5099]: I0121 18:16:16.265254 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:16 crc kubenswrapper[5099]: E0121 18:16:16.265457 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:16.765423918 +0000 UTC m=+134.179386379 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:16 crc kubenswrapper[5099]: I0121 18:16:16.266279 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:16 crc kubenswrapper[5099]: E0121 18:16:16.266763 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:16.766756211 +0000 UTC m=+134.180718672 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:16 crc kubenswrapper[5099]: I0121 18:16:16.367685 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:16 crc kubenswrapper[5099]: E0121 18:16:16.367905 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:16.86786748 +0000 UTC m=+134.281829951 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:16 crc kubenswrapper[5099]: I0121 18:16:16.368524 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:16 crc kubenswrapper[5099]: E0121 18:16:16.368855 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:16.868840875 +0000 UTC m=+134.282803336 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:16 crc kubenswrapper[5099]: I0121 18:16:16.469608 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:16 crc kubenswrapper[5099]: E0121 18:16:16.469847 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:16.969827111 +0000 UTC m=+134.383789572 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:16 crc kubenswrapper[5099]: I0121 18:16:16.469905 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:16 crc kubenswrapper[5099]: E0121 18:16:16.470337 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:16.970315423 +0000 UTC m=+134.384277884 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:16 crc kubenswrapper[5099]: I0121 18:16:16.501816 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-lqqhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 18:16:16 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Jan 21 18:16:16 crc kubenswrapper[5099]: [+]process-running ok Jan 21 18:16:16 crc kubenswrapper[5099]: healthz check failed Jan 21 18:16:16 crc kubenswrapper[5099]: I0121 18:16:16.501900 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" podUID="ad44bdbe-5009-4b21-ad83-21185ec2d86d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 18:16:16 crc kubenswrapper[5099]: E0121 18:16:16.570978 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:17.07095925 +0000 UTC m=+134.484921711 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:16 crc kubenswrapper[5099]: I0121 18:16:16.570980 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:16 crc kubenswrapper[5099]: I0121 18:16:16.571378 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:16 crc kubenswrapper[5099]: E0121 18:16:16.571892 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:17.071872843 +0000 UTC m=+134.485835304 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:16 crc kubenswrapper[5099]: I0121 18:16:16.672333 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:16 crc kubenswrapper[5099]: E0121 18:16:16.672533 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:17.172499611 +0000 UTC m=+134.586462072 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:16 crc kubenswrapper[5099]: I0121 18:16:16.672991 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:16 crc kubenswrapper[5099]: E0121 18:16:16.673310 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:17.17329092 +0000 UTC m=+134.587253441 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:16 crc kubenswrapper[5099]: I0121 18:16:16.774798 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:16 crc kubenswrapper[5099]: E0121 18:16:16.775362 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:17.275342642 +0000 UTC m=+134.689305103 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:16 crc kubenswrapper[5099]: I0121 18:16:16.877204 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:16 crc kubenswrapper[5099]: E0121 18:16:16.877836 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:17.377816855 +0000 UTC m=+134.791779316 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:16 crc kubenswrapper[5099]: I0121 18:16:16.979223 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:16 crc kubenswrapper[5099]: E0121 18:16:16.979984 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:17.47995418 +0000 UTC m=+134.893916641 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:17 crc kubenswrapper[5099]: I0121 18:16:17.090763 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:17 crc kubenswrapper[5099]: E0121 18:16:17.091624 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:17.591581439 +0000 UTC m=+135.005544080 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:17 crc kubenswrapper[5099]: I0121 18:16:17.193803 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:17 crc kubenswrapper[5099]: E0121 18:16:17.193945 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:17.693909528 +0000 UTC m=+135.107872009 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:17 crc kubenswrapper[5099]: I0121 18:16:17.194412 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:17 crc kubenswrapper[5099]: E0121 18:16:17.195285 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:17.695273392 +0000 UTC m=+135.109235853 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:17 crc kubenswrapper[5099]: I0121 18:16:17.372352 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-btpkr" Jan 21 18:16:17 crc kubenswrapper[5099]: I0121 18:16:17.372516 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 21 18:16:17 crc kubenswrapper[5099]: I0121 18:16:17.372571 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:17 crc kubenswrapper[5099]: E0121 18:16:17.375329 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:17.875284402 +0000 UTC m=+135.289246873 (durationBeforeRetry 500ms). 
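Note: the loop above is the kubelet volume reconciler retrying against a CSI driver that has not registered yet. Every MountVolume/UnmountVolume attempt for pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 fails with "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers" and is re-queued after a 500ms backoff ("No retries permitted until ..."). One way to confirm that diagnosis from outside the node is to read the CSINode object, which the kubelet updates as drivers register. A minimal client-go sketch, assuming only what the log shows (node name "crc"; the kubeconfig path is a placeholder):

    // List the CSI drivers currently registered on node "crc". An empty (or
    // hostpath-less) Drivers list matches the failures logged above.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        csiNode, err := cs.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, d := range csiNode.Spec.Drivers {
            fmt.Printf("registered driver: %s (node ID %q)\n", d.Name, d.NodeID)
        }
    }

Until the csi-hostpathplugin pod comes up (its containers start later in this log), kubevirt.io.hostpath-provisioner is absent from that list and this retry loop is expected.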
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:17 crc kubenswrapper[5099]: I0121 18:16:17.375514 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:17 crc kubenswrapper[5099]: E0121 18:16:17.375889 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:17.875872356 +0000 UTC m=+135.289834867 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:17 crc kubenswrapper[5099]: I0121 18:16:17.377286 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\"" Jan 21 18:16:17 crc kubenswrapper[5099]: I0121 18:16:17.377301 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\"" Jan 21 18:16:17 crc kubenswrapper[5099]: I0121 18:16:17.387477 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qsx2f" event={"ID":"28d3b79b-3ce4-427c-834d-9d4b2f9f0601","Type":"ContainerDied","Data":"d7b335b1790036f918c921c78ea31360d70e7fb73a9225116ec50d01437dd00b"} Jan 21 18:16:17 crc kubenswrapper[5099]: I0121 18:16:17.387529 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ncssr" event={"ID":"4202775a-8750-4d76-ad90-6a5703048787","Type":"ContainerDied","Data":"c590809ce905abe3a59ed4b7c7f7aff98fc18ba35b34b7a0f0f584f3257b4fcd"} Jan 21 18:16:17 crc kubenswrapper[5099]: I0121 18:16:17.387588 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-5777786469-lhgtf" Jan 21 18:16:17 crc kubenswrapper[5099]: I0121 18:16:17.387660 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:16:17 crc kubenswrapper[5099]: I0121 18:16:17.387675 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Jan 21 18:16:17 crc kubenswrapper[5099]: I0121 18:16:17.387693 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Jan 21 18:16:17 crc kubenswrapper[5099]: I0121 
18:16:17.397997 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-btpkr" Jan 21 18:16:17 crc kubenswrapper[5099]: I0121 18:16:17.398033 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Jan 21 18:16:17 crc kubenswrapper[5099]: I0121 18:16:17.398421 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 21 18:16:17 crc kubenswrapper[5099]: I0121 18:16:17.405490 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Jan 21 18:16:17 crc kubenswrapper[5099]: I0121 18:16:17.410058 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Jan 21 18:16:17 crc kubenswrapper[5099]: I0121 18:16:17.476726 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:17 crc kubenswrapper[5099]: I0121 18:16:17.477860 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/64e6f34d-28ce-47dd-9279-1c5fc5fb8823-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"64e6f34d-28ce-47dd-9279-1c5fc5fb8823\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 21 18:16:17 crc kubenswrapper[5099]: I0121 18:16:17.477903 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2e565af4-763d-4ad6-bebf-13f785bd61ad-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"2e565af4-763d-4ad6-bebf-13f785bd61ad\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 21 18:16:17 crc kubenswrapper[5099]: I0121 18:16:17.478014 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/64e6f34d-28ce-47dd-9279-1c5fc5fb8823-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"64e6f34d-28ce-47dd-9279-1c5fc5fb8823\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 21 18:16:17 crc kubenswrapper[5099]: I0121 18:16:17.478112 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e565af4-763d-4ad6-bebf-13f785bd61ad-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"2e565af4-763d-4ad6-bebf-13f785bd61ad\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 21 18:16:17 crc kubenswrapper[5099]: E0121 18:16:17.478160 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:17.978098193 +0000 UTC m=+135.392060644 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:17 crc kubenswrapper[5099]: I0121 18:16:17.478213 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:17 crc kubenswrapper[5099]: E0121 18:16:17.479216 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:17.97919191 +0000 UTC m=+135.393154371 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:17 crc kubenswrapper[5099]: I0121 18:16:17.508070 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-lqqhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 18:16:17 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Jan 21 18:16:17 crc kubenswrapper[5099]: [+]process-running ok Jan 21 18:16:17 crc kubenswrapper[5099]: healthz check failed Jan 21 18:16:17 crc kubenswrapper[5099]: I0121 18:16:17.508151 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" podUID="ad44bdbe-5009-4b21-ad83-21185ec2d86d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 18:16:17 crc kubenswrapper[5099]: I0121 18:16:17.580350 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:17 crc kubenswrapper[5099]: I0121 18:16:17.581111 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/64e6f34d-28ce-47dd-9279-1c5fc5fb8823-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"64e6f34d-28ce-47dd-9279-1c5fc5fb8823\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 21 18:16:17 crc kubenswrapper[5099]: I0121 18:16:17.581148 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/2e565af4-763d-4ad6-bebf-13f785bd61ad-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"2e565af4-763d-4ad6-bebf-13f785bd61ad\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 21 18:16:17 crc kubenswrapper[5099]: I0121 18:16:17.581380 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/64e6f34d-28ce-47dd-9279-1c5fc5fb8823-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"64e6f34d-28ce-47dd-9279-1c5fc5fb8823\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 21 18:16:17 crc kubenswrapper[5099]: I0121 18:16:17.581413 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e565af4-763d-4ad6-bebf-13f785bd61ad-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"2e565af4-763d-4ad6-bebf-13f785bd61ad\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 21 18:16:17 crc kubenswrapper[5099]: E0121 18:16:17.582080 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:18.082058223 +0000 UTC m=+135.496020684 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:17 crc kubenswrapper[5099]: I0121 18:16:17.582129 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/64e6f34d-28ce-47dd-9279-1c5fc5fb8823-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"64e6f34d-28ce-47dd-9279-1c5fc5fb8823\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 21 18:16:17 crc kubenswrapper[5099]: I0121 18:16:17.582175 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2e565af4-763d-4ad6-bebf-13f785bd61ad-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"2e565af4-763d-4ad6-bebf-13f785bd61ad\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 21 18:16:17 crc kubenswrapper[5099]: I0121 18:16:17.623126 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/64e6f34d-28ce-47dd-9279-1c5fc5fb8823-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"64e6f34d-28ce-47dd-9279-1c5fc5fb8823\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 21 18:16:17 crc kubenswrapper[5099]: I0121 18:16:17.642180 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e565af4-763d-4ad6-bebf-13f785bd61ad-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"2e565af4-763d-4ad6-bebf-13f785bd61ad\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 21 18:16:17 crc kubenswrapper[5099]: I0121 18:16:17.684663 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:17 crc kubenswrapper[5099]: E0121 18:16:17.685212 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:18.185191072 +0000 UTC m=+135.599153533 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:17 crc kubenswrapper[5099]: I0121 18:16:17.720763 5099 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 21 18:16:17 crc kubenswrapper[5099]: I0121 18:16:17.786776 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:17 crc kubenswrapper[5099]: E0121 18:16:17.787525 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:18.287507041 +0000 UTC m=+135.701469502 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:17 crc kubenswrapper[5099]: I0121 18:16:17.812751 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 21 18:16:17 crc kubenswrapper[5099]: I0121 18:16:17.826233 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 21 18:16:17 crc kubenswrapper[5099]: I0121 18:16:17.889214 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:17 crc kubenswrapper[5099]: E0121 18:16:17.889659 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:18.389638686 +0000 UTC m=+135.803601147 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:17 crc kubenswrapper[5099]: I0121 18:16:17.991011 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:17 crc kubenswrapper[5099]: E0121 18:16:17.992006 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:18.491988155 +0000 UTC m=+135.905950616 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:18 crc kubenswrapper[5099]: I0121 18:16:18.070496 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-r6vwz" event={"ID":"d0d4f813-c328-441b-963b-5241f73f9da2","Type":"ContainerStarted","Data":"97cc7096a0d47c1a440dd8d23ec33248b4f9ca2d01711552241bed20e88a004f"} Jan 21 18:16:18 crc kubenswrapper[5099]: I0121 18:16:18.070576 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-r6vwz" event={"ID":"d0d4f813-c328-441b-963b-5241f73f9da2","Type":"ContainerStarted","Data":"cf1160cc4e66b281434f9b0c5e93c15f5e9572447d2e199e870cd1187a02757b"} Jan 21 18:16:18 crc kubenswrapper[5099]: I0121 18:16:18.077020 5099 generic.go:358] "Generic (PLEG): container finished" podID="05e481c5-0ad1-4c76-bf43-a32b82b763c7" containerID="462e144af1c83a94a48d6dab7d1525a6ce5af7900773a8993d3a5ed757c3fc9e" exitCode=0 Jan 21 18:16:18 crc kubenswrapper[5099]: I0121 18:16:18.077359 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483655-48djt" event={"ID":"05e481c5-0ad1-4c76-bf43-a32b82b763c7","Type":"ContainerDied","Data":"462e144af1c83a94a48d6dab7d1525a6ce5af7900773a8993d3a5ed757c3fc9e"} Jan 21 18:16:18 crc kubenswrapper[5099]: I0121 18:16:18.094643 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:18 crc kubenswrapper[5099]: E0121 18:16:18.098023 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:18.598001316 +0000 UTC m=+136.011963977 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:18 crc kubenswrapper[5099]: I0121 18:16:18.198309 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:18 crc kubenswrapper[5099]: E0121 18:16:18.198779 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:18.698747356 +0000 UTC m=+136.112709817 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:18 crc kubenswrapper[5099]: I0121 18:16:18.281235 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Jan 21 18:16:18 crc kubenswrapper[5099]: I0121 18:16:18.299978 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:18 crc kubenswrapper[5099]: E0121 18:16:18.300466 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 18:16:18.800442099 +0000 UTC m=+136.214404560 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tjl2r" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:18 crc kubenswrapper[5099]: I0121 18:16:18.302446 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Jan 21 18:16:18 crc kubenswrapper[5099]: W0121 18:16:18.327981 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod2e565af4_763d_4ad6_bebf_13f785bd61ad.slice/crio-7b4c7f5ebd2dc36c363323dd24c51da0db94fe83e1525d4fe75675e9d4e1aa75 WatchSource:0}: Error finding container 7b4c7f5ebd2dc36c363323dd24c51da0db94fe83e1525d4fe75675e9d4e1aa75: Status 404 returned error can't find the container with id 7b4c7f5ebd2dc36c363323dd24c51da0db94fe83e1525d4fe75675e9d4e1aa75 Jan 21 18:16:18 crc kubenswrapper[5099]: W0121 18:16:18.337573 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod64e6f34d_28ce_47dd_9279_1c5fc5fb8823.slice/crio-2c29e9fdb7111dbeb90e0e21fc8f758362cbf8af7d36c530de75d00014afd913 WatchSource:0}: Error finding container 2c29e9fdb7111dbeb90e0e21fc8f758362cbf8af7d36c530de75d00014afd913: Status 404 returned error can't find the container with id 2c29e9fdb7111dbeb90e0e21fc8f758362cbf8af7d36c530de75d00014afd913 Jan 21 18:16:18 crc kubenswrapper[5099]: I0121 18:16:18.378033 5099 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-21T18:16:17.720796002Z","UUID":"9722f842-3610-401f-827d-b763ab77aca2","Handler":null,"Name":"","Endpoint":""} Jan 21 18:16:18 crc kubenswrapper[5099]: I0121 18:16:18.441602 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:18 crc kubenswrapper[5099]: E0121 18:16:18.442051 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 18:16:18.942029529 +0000 UTC m=+136.355991990 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 18:16:18 crc kubenswrapper[5099]: I0121 18:16:18.497812 5099 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 21 18:16:18 crc kubenswrapper[5099]: I0121 18:16:18.497891 5099 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 21 18:16:18 crc kubenswrapper[5099]: I0121 18:16:18.499697 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-lqqhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 18:16:18 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Jan 21 18:16:18 crc kubenswrapper[5099]: [+]process-running ok Jan 21 18:16:18 crc kubenswrapper[5099]: healthz check failed Jan 21 18:16:18 crc kubenswrapper[5099]: I0121 18:16:18.499798 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" podUID="ad44bdbe-5009-4b21-ad83-21185ec2d86d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 18:16:18 crc kubenswrapper[5099]: I0121 18:16:18.543400 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:18 crc kubenswrapper[5099]: I0121 18:16:18.546875 5099 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
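Note: the records above are the registration handshake completing. The plugin watcher had already picked up /var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock; here csi_plugin.go validates and registers the driver served at /var/lib/kubelet/plugins/csi-hostpath/csi.sock. The validation step amounts to calling the CSI Identity service on the announced endpoint and checking the returned driver name. An illustrative standalone sketch of that same gRPC call (socket path taken from the log; this is not the kubelet's own code, and it needs read access to the socket):

    // Ask a CSI driver for its name over its Unix socket, the same Identity
    // call the kubelet uses to validate a newly announced plugin.
    package main

    import (
        "context"
        "fmt"
        "time"

        csi "github.com/container-storage-interface/spec/lib/go/csi"
        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // Dial the driver's Unix-domain socket, as announced at registration.
        conn, err := grpc.DialContext(ctx, "unix:///var/lib/kubelet/plugins/csi-hostpath/csi.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        // For the driver registered above this should print
        // "kubevirt.io.hostpath-provisioner".
        info, err := csi.NewIdentityClient(conn).GetPluginInfo(ctx, &csi.GetPluginInfoRequest{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("driver: %s, vendor version: %s\n", info.GetName(), info.GetVendorVersion())
    }

The "attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice" line just above means the driver's NodeGetCapabilities response does not advertise staging, so the kubelet skips NodeStageVolume and reports MountDevice as succeeded immediately (next record); the actual mount happens at NodePublishVolume (MountVolume.SetUp), which succeeds a few seconds later in this log.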
Jan 21 18:16:18 crc kubenswrapper[5099]: I0121 18:16:18.546982 5099 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:18 crc kubenswrapper[5099]: I0121 18:16:18.604039 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-lkws9" Jan 21 18:16:19 crc kubenswrapper[5099]: I0121 18:16:19.092921 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"2e565af4-763d-4ad6-bebf-13f785bd61ad","Type":"ContainerStarted","Data":"7b4c7f5ebd2dc36c363323dd24c51da0db94fe83e1525d4fe75675e9d4e1aa75"} Jan 21 18:16:19 crc kubenswrapper[5099]: I0121 18:16:19.094184 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"64e6f34d-28ce-47dd-9279-1c5fc5fb8823","Type":"ContainerStarted","Data":"2c29e9fdb7111dbeb90e0e21fc8f758362cbf8af7d36c530de75d00014afd913"} Jan 21 18:16:19 crc kubenswrapper[5099]: I0121 18:16:19.497651 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-lqqhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 18:16:19 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Jan 21 18:16:19 crc kubenswrapper[5099]: [+]process-running ok Jan 21 18:16:19 crc kubenswrapper[5099]: healthz check failed Jan 21 18:16:19 crc kubenswrapper[5099]: I0121 18:16:19.498087 5099 ???:1] "http: TLS handshake error from 192.168.126.11:34582: no serving certificate available for the kubelet" Jan 21 18:16:19 crc kubenswrapper[5099]: I0121 18:16:19.498154 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" podUID="ad44bdbe-5009-4b21-ad83-21185ec2d86d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 18:16:20 crc kubenswrapper[5099]: I0121 18:16:20.167631 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-84k5t" Jan 21 18:16:20 crc kubenswrapper[5099]: I0121 18:16:20.285485 5099 patch_prober.go:28] interesting pod/downloads-747b44746d-zlpql container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Jan 21 18:16:20 crc kubenswrapper[5099]: I0121 18:16:20.285572 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-zlpql" podUID="bee88171-b2f0-49bb-92aa-8a0d79d87cb7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.41:8080/\": dial tcp 10.217.0.41:8080: connect: connection refused" Jan 21 18:16:20 crc kubenswrapper[5099]: I0121 18:16:20.288583 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-lxg2b" Jan 21 
18:16:20 crc kubenswrapper[5099]: I0121 18:16:20.288801 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483655-48djt" Jan 21 18:16:20 crc kubenswrapper[5099]: I0121 18:16:20.410585 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zlnm\" (UniqueName: \"kubernetes.io/projected/05e481c5-0ad1-4c76-bf43-a32b82b763c7-kube-api-access-5zlnm\") pod \"05e481c5-0ad1-4c76-bf43-a32b82b763c7\" (UID: \"05e481c5-0ad1-4c76-bf43-a32b82b763c7\") " Jan 21 18:16:20 crc kubenswrapper[5099]: I0121 18:16:20.410808 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/05e481c5-0ad1-4c76-bf43-a32b82b763c7-secret-volume\") pod \"05e481c5-0ad1-4c76-bf43-a32b82b763c7\" (UID: \"05e481c5-0ad1-4c76-bf43-a32b82b763c7\") " Jan 21 18:16:20 crc kubenswrapper[5099]: I0121 18:16:20.410832 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/05e481c5-0ad1-4c76-bf43-a32b82b763c7-config-volume\") pod \"05e481c5-0ad1-4c76-bf43-a32b82b763c7\" (UID: \"05e481c5-0ad1-4c76-bf43-a32b82b763c7\") " Jan 21 18:16:20 crc kubenswrapper[5099]: I0121 18:16:20.411972 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05e481c5-0ad1-4c76-bf43-a32b82b763c7-config-volume" (OuterVolumeSpecName: "config-volume") pod "05e481c5-0ad1-4c76-bf43-a32b82b763c7" (UID: "05e481c5-0ad1-4c76-bf43-a32b82b763c7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:16:20 crc kubenswrapper[5099]: I0121 18:16:20.418835 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05e481c5-0ad1-4c76-bf43-a32b82b763c7-kube-api-access-5zlnm" (OuterVolumeSpecName: "kube-api-access-5zlnm") pod "05e481c5-0ad1-4c76-bf43-a32b82b763c7" (UID: "05e481c5-0ad1-4c76-bf43-a32b82b763c7"). InnerVolumeSpecName "kube-api-access-5zlnm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:16:20 crc kubenswrapper[5099]: I0121 18:16:20.420465 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05e481c5-0ad1-4c76-bf43-a32b82b763c7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "05e481c5-0ad1-4c76-bf43-a32b82b763c7" (UID: "05e481c5-0ad1-4c76-bf43-a32b82b763c7"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:16:20 crc kubenswrapper[5099]: I0121 18:16:20.498827 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-lqqhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 18:16:20 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Jan 21 18:16:20 crc kubenswrapper[5099]: [+]process-running ok Jan 21 18:16:20 crc kubenswrapper[5099]: healthz check failed Jan 21 18:16:20 crc kubenswrapper[5099]: I0121 18:16:20.498897 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" podUID="ad44bdbe-5009-4b21-ad83-21185ec2d86d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 18:16:20 crc kubenswrapper[5099]: I0121 18:16:20.512912 5099 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/05e481c5-0ad1-4c76-bf43-a32b82b763c7-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 18:16:20 crc kubenswrapper[5099]: I0121 18:16:20.512956 5099 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/05e481c5-0ad1-4c76-bf43-a32b82b763c7-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 18:16:20 crc kubenswrapper[5099]: I0121 18:16:20.512969 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5zlnm\" (UniqueName: \"kubernetes.io/projected/05e481c5-0ad1-4c76-bf43-a32b82b763c7-kube-api-access-5zlnm\") on node \"crc\" DevicePath \"\"" Jan 21 18:16:20 crc kubenswrapper[5099]: I0121 18:16:20.747298 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-xr84c" Jan 21 18:16:21 crc kubenswrapper[5099]: I0121 18:16:21.126000 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483655-48djt" Jan 21 18:16:21 crc kubenswrapper[5099]: I0121 18:16:21.126036 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483655-48djt" event={"ID":"05e481c5-0ad1-4c76-bf43-a32b82b763c7","Type":"ContainerDied","Data":"c296b4466ceb446f4719719f2e61cb4b606acf9ee809a9cfe3bd2c9c479a8854"} Jan 21 18:16:21 crc kubenswrapper[5099]: I0121 18:16:21.126076 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c296b4466ceb446f4719719f2e61cb4b606acf9ee809a9cfe3bd2c9c479a8854" Jan 21 18:16:21 crc kubenswrapper[5099]: I0121 18:16:21.319088 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-48lzl" Jan 21 18:16:21 crc kubenswrapper[5099]: I0121 18:16:21.496719 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-lqqhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 18:16:21 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Jan 21 18:16:21 crc kubenswrapper[5099]: [+]process-running ok Jan 21 18:16:21 crc kubenswrapper[5099]: healthz check failed Jan 21 18:16:21 crc kubenswrapper[5099]: I0121 18:16:21.496803 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" podUID="ad44bdbe-5009-4b21-ad83-21185ec2d86d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 18:16:21 crc kubenswrapper[5099]: I0121 18:16:21.530123 5099 patch_prober.go:28] interesting pod/downloads-747b44746d-zlpql container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.41:8080/\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Jan 21 18:16:21 crc kubenswrapper[5099]: I0121 18:16:21.530230 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-zlpql" podUID="bee88171-b2f0-49bb-92aa-8a0d79d87cb7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.41:8080/\": dial tcp 10.217.0.41:8080: connect: connection refused" Jan 21 18:16:21 crc kubenswrapper[5099]: I0121 18:16:21.775653 5099 patch_prober.go:28] interesting pod/console-64d44f6ddf-6r5xz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Jan 21 18:16:21 crc kubenswrapper[5099]: I0121 18:16:21.775789 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-6r5xz" podUID="cf468a9b-3840-46c0-8390-79ec278be1d0" containerName="console" probeResult="failure" output="Get \"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused" Jan 21 18:16:22 crc kubenswrapper[5099]: I0121 18:16:22.496163 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-lqqhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 18:16:22 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Jan 21 18:16:22 crc kubenswrapper[5099]: [+]process-running ok Jan 21 18:16:22 crc 
kubenswrapper[5099]: healthz check failed Jan 21 18:16:22 crc kubenswrapper[5099]: I0121 18:16:22.496236 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" podUID="ad44bdbe-5009-4b21-ad83-21185ec2d86d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 18:16:23 crc kubenswrapper[5099]: I0121 18:16:23.501834 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-lqqhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 18:16:23 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Jan 21 18:16:23 crc kubenswrapper[5099]: [+]process-running ok Jan 21 18:16:23 crc kubenswrapper[5099]: healthz check failed Jan 21 18:16:23 crc kubenswrapper[5099]: I0121 18:16:23.509065 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" podUID="ad44bdbe-5009-4b21-ad83-21185ec2d86d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 18:16:24 crc kubenswrapper[5099]: I0121 18:16:24.496748 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-lqqhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 18:16:24 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Jan 21 18:16:24 crc kubenswrapper[5099]: [+]process-running ok Jan 21 18:16:24 crc kubenswrapper[5099]: healthz check failed Jan 21 18:16:24 crc kubenswrapper[5099]: I0121 18:16:24.496862 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" podUID="ad44bdbe-5009-4b21-ad83-21185ec2d86d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 18:16:24 crc kubenswrapper[5099]: I0121 18:16:24.568345 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tjl2r\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:24 crc kubenswrapper[5099]: I0121 18:16:24.588268 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 18:16:24 crc kubenswrapper[5099]: I0121 18:16:24.617140 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". 
PluginName "kubernetes.io/csi", VolumeGIDValue "" Jan 21 18:16:24 crc kubenswrapper[5099]: E0121 18:16:24.867791 5099 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f7db3ee6f61c5c01e7605630be93c8b733c80c9c6c756304b44f3630f68f632a" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 21 18:16:24 crc kubenswrapper[5099]: E0121 18:16:24.870200 5099 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f7db3ee6f61c5c01e7605630be93c8b733c80c9c6c756304b44f3630f68f632a" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 21 18:16:24 crc kubenswrapper[5099]: E0121 18:16:24.871728 5099 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f7db3ee6f61c5c01e7605630be93c8b733c80c9c6c756304b44f3630f68f632a" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 21 18:16:24 crc kubenswrapper[5099]: E0121 18:16:24.871821 5099 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-zjwsj" podUID="e0db719c-cb3c-4c7d-ab76-20a341a011e6" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 21 18:16:24 crc kubenswrapper[5099]: I0121 18:16:24.878007 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Jan 21 18:16:24 crc kubenswrapper[5099]: I0121 18:16:24.886359 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:16:25 crc kubenswrapper[5099]: E0121 18:16:25.006684 5099 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee64e319_d2fd_4a23_808e_a4ab684a16af.slice/crio-conmon-febf57ef5d3b5b3933f524609bb87fa63900036e577a74c06ffe7fccde4ea6f9.scope\": RecentStats: unable to find data in memory cache]" Jan 21 18:16:25 crc kubenswrapper[5099]: I0121 18:16:25.178650 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-84k5t" Jan 21 18:16:25 crc kubenswrapper[5099]: I0121 18:16:25.480614 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 18:16:25 crc kubenswrapper[5099]: I0121 18:16:25.480696 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 18:16:25 crc kubenswrapper[5099]: I0121 18:16:25.480767 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 18:16:25 crc kubenswrapper[5099]: I0121 18:16:25.480798 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 18:16:25 crc kubenswrapper[5099]: I0121 18:16:25.482839 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Jan 21 18:16:25 crc kubenswrapper[5099]: I0121 18:16:25.483186 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Jan 21 18:16:25 crc kubenswrapper[5099]: I0121 18:16:25.483729 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Jan 21 18:16:25 crc kubenswrapper[5099]: I0121 18:16:25.493721 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Jan 21 18:16:25 crc kubenswrapper[5099]: I0121 18:16:25.496338 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod 
\"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 18:16:25 crc kubenswrapper[5099]: I0121 18:16:25.499635 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 18:16:25 crc kubenswrapper[5099]: I0121 18:16:25.501528 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-lqqhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 18:16:25 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Jan 21 18:16:25 crc kubenswrapper[5099]: [+]process-running ok Jan 21 18:16:25 crc kubenswrapper[5099]: healthz check failed Jan 21 18:16:25 crc kubenswrapper[5099]: I0121 18:16:25.501600 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" podUID="ad44bdbe-5009-4b21-ad83-21185ec2d86d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 18:16:25 crc kubenswrapper[5099]: I0121 18:16:25.507590 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 18:16:25 crc kubenswrapper[5099]: I0121 18:16:25.507842 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 18:16:25 crc kubenswrapper[5099]: I0121 18:16:25.628327 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 18:16:25 crc kubenswrapper[5099]: I0121 18:16:25.635727 5099 util.go:30] "No sandbox for pod can be found. 
Jan 21 18:16:25 crc kubenswrapper[5099]: I0121 18:16:25.694570 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d26f0ad-829f-4f64-82b5-1292bd2316f0-metrics-certs\") pod \"network-metrics-daemon-tsdhb\" (UID: \"0d26f0ad-829f-4f64-82b5-1292bd2316f0\") " pod="openshift-multus/network-metrics-daemon-tsdhb"
Jan 21 18:16:25 crc kubenswrapper[5099]: I0121 18:16:25.696986 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\""
Jan 21 18:16:25 crc kubenswrapper[5099]: I0121 18:16:25.722401 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d26f0ad-829f-4f64-82b5-1292bd2316f0-metrics-certs\") pod \"network-metrics-daemon-tsdhb\" (UID: \"0d26f0ad-829f-4f64-82b5-1292bd2316f0\") " pod="openshift-multus/network-metrics-daemon-tsdhb"
Jan 21 18:16:25 crc kubenswrapper[5099]: I0121 18:16:25.741219 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\""
Jan 21 18:16:25 crc kubenswrapper[5099]: I0121 18:16:25.745769 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 21 18:16:25 crc kubenswrapper[5099]: I0121 18:16:25.749953 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tsdhb"
Jan 21 18:16:26 crc kubenswrapper[5099]: I0121 18:16:26.094217 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes"
Jan 21 18:16:26 crc kubenswrapper[5099]: I0121 18:16:26.191326 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-r6vwz" event={"ID":"d0d4f813-c328-441b-963b-5241f73f9da2","Type":"ContainerStarted","Data":"79cfb2d5b7921b70d46cab4dad4914076657892dcdf9616e6a76408b660ef443"}
Jan 21 18:16:26 crc kubenswrapper[5099]: I0121 18:16:26.196572 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"2e565af4-763d-4ad6-bebf-13f785bd61ad","Type":"ContainerStarted","Data":"e00ecf85eb8a3a33203ec7ff8dd1e5c635967a3cdca2b4e0ee8a04f7c68f6f8e"}
Jan 21 18:16:26 crc kubenswrapper[5099]: I0121 18:16:26.275623 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-r6vwz" podStartSLOduration=28.275599579 podStartE2EDuration="28.275599579s" podCreationTimestamp="2026-01-21 18:15:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:16:26.267600382 +0000 UTC m=+143.681562853" watchObservedRunningTime="2026-01-21 18:16:26.275599579 +0000 UTC m=+143.689562040"
Jan 21 18:16:26 crc kubenswrapper[5099]: I0121 18:16:26.294180 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-11-crc" podStartSLOduration=10.294152329 podStartE2EDuration="10.294152329s" podCreationTimestamp="2026-01-21 18:16:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:16:26.291564325 +0000 UTC m=+143.705526786" watchObservedRunningTime="2026-01-21 18:16:26.294152329 +0000 UTC m=+143.708114790"
Jan 21 18:16:26 crc kubenswrapper[5099]: I0121 18:16:26.497172 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-lqqhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 18:16:26 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld
Jan 21 18:16:26 crc kubenswrapper[5099]: [+]process-running ok
Jan 21 18:16:26 crc kubenswrapper[5099]: healthz check failed
Jan 21 18:16:26 crc kubenswrapper[5099]: I0121 18:16:26.497259 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" podUID="ad44bdbe-5009-4b21-ad83-21185ec2d86d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 18:16:27 crc kubenswrapper[5099]: I0121 18:16:27.497867 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-lqqhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 18:16:27 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld
Jan 21 18:16:27 crc kubenswrapper[5099]: [+]process-running ok
Jan 21 18:16:27 crc kubenswrapper[5099]: healthz check failed
Jan 21 18:16:27 crc kubenswrapper[5099]: I0121 18:16:27.498868 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" podUID="ad44bdbe-5009-4b21-ad83-21185ec2d86d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 18:16:28 crc kubenswrapper[5099]: I0121 18:16:28.498799 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-lqqhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 18:16:28 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld
Jan 21 18:16:28 crc kubenswrapper[5099]: [+]process-running ok
Jan 21 18:16:28 crc kubenswrapper[5099]: healthz check failed
Jan 21 18:16:28 crc kubenswrapper[5099]: I0121 18:16:28.498966 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" podUID="ad44bdbe-5009-4b21-ad83-21185ec2d86d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 18:16:29 crc kubenswrapper[5099]: I0121 18:16:29.507664 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-lqqhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 18:16:29 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld
Jan 21 18:16:29 crc kubenswrapper[5099]: [+]process-running ok
Jan 21 18:16:29 crc kubenswrapper[5099]: healthz check failed
Jan 21 18:16:29 crc kubenswrapper[5099]: I0121 18:16:29.507817 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" podUID="ad44bdbe-5009-4b21-ad83-21185ec2d86d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 18:16:29 crc kubenswrapper[5099]: I0121 18:16:29.894632 5099 ???:1] "http: TLS handshake error from 192.168.126.11:54896: no serving certificate available for the kubelet"
5099 ???:1] "http: TLS handshake error from 192.168.126.11:54896: no serving certificate available for the kubelet" Jan 21 18:16:30 crc kubenswrapper[5099]: I0121 18:16:30.239569 5099 generic.go:358] "Generic (PLEG): container finished" podID="2e565af4-763d-4ad6-bebf-13f785bd61ad" containerID="e00ecf85eb8a3a33203ec7ff8dd1e5c635967a3cdca2b4e0ee8a04f7c68f6f8e" exitCode=0 Jan 21 18:16:30 crc kubenswrapper[5099]: I0121 18:16:30.239697 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"2e565af4-763d-4ad6-bebf-13f785bd61ad","Type":"ContainerDied","Data":"e00ecf85eb8a3a33203ec7ff8dd1e5c635967a3cdca2b4e0ee8a04f7c68f6f8e"} Jan 21 18:16:30 crc kubenswrapper[5099]: I0121 18:16:30.285848 5099 patch_prober.go:28] interesting pod/downloads-747b44746d-zlpql container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Jan 21 18:16:30 crc kubenswrapper[5099]: I0121 18:16:30.285945 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-zlpql" podUID="bee88171-b2f0-49bb-92aa-8a0d79d87cb7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.41:8080/\": dial tcp 10.217.0.41:8080: connect: connection refused" Jan 21 18:16:30 crc kubenswrapper[5099]: I0121 18:16:30.500978 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-lqqhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 18:16:30 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Jan 21 18:16:30 crc kubenswrapper[5099]: [+]process-running ok Jan 21 18:16:30 crc kubenswrapper[5099]: healthz check failed Jan 21 18:16:30 crc kubenswrapper[5099]: I0121 18:16:30.501101 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" podUID="ad44bdbe-5009-4b21-ad83-21185ec2d86d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 18:16:31 crc kubenswrapper[5099]: I0121 18:16:31.497663 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-lqqhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 18:16:31 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Jan 21 18:16:31 crc kubenswrapper[5099]: [+]process-running ok Jan 21 18:16:31 crc kubenswrapper[5099]: healthz check failed Jan 21 18:16:31 crc kubenswrapper[5099]: I0121 18:16:31.497778 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" podUID="ad44bdbe-5009-4b21-ad83-21185ec2d86d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 18:16:31 crc kubenswrapper[5099]: I0121 18:16:31.529641 5099 patch_prober.go:28] interesting pod/downloads-747b44746d-zlpql container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.41:8080/\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Jan 21 18:16:31 crc kubenswrapper[5099]: I0121 18:16:31.529706 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-zlpql" 
podUID="bee88171-b2f0-49bb-92aa-8a0d79d87cb7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.41:8080/\": dial tcp 10.217.0.41:8080: connect: connection refused" Jan 21 18:16:31 crc kubenswrapper[5099]: I0121 18:16:31.529764 5099 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-747b44746d-zlpql" Jan 21 18:16:31 crc kubenswrapper[5099]: I0121 18:16:31.533850 5099 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"589a5c693a83d931b05c6c81c33f74e130efe642df03d2df2c5e21874f113621"} pod="openshift-console/downloads-747b44746d-zlpql" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 21 18:16:31 crc kubenswrapper[5099]: I0121 18:16:31.535238 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-console/downloads-747b44746d-zlpql" podUID="bee88171-b2f0-49bb-92aa-8a0d79d87cb7" containerName="download-server" containerID="cri-o://589a5c693a83d931b05c6c81c33f74e130efe642df03d2df2c5e21874f113621" gracePeriod=2 Jan 21 18:16:31 crc kubenswrapper[5099]: I0121 18:16:31.536109 5099 patch_prober.go:28] interesting pod/downloads-747b44746d-zlpql container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Jan 21 18:16:31 crc kubenswrapper[5099]: I0121 18:16:31.536183 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-zlpql" podUID="bee88171-b2f0-49bb-92aa-8a0d79d87cb7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.41:8080/\": dial tcp 10.217.0.41:8080: connect: connection refused" Jan 21 18:16:31 crc kubenswrapper[5099]: I0121 18:16:31.918571 5099 patch_prober.go:28] interesting pod/console-64d44f6ddf-6r5xz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Jan 21 18:16:31 crc kubenswrapper[5099]: I0121 18:16:31.918693 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-6r5xz" podUID="cf468a9b-3840-46c0-8390-79ec278be1d0" containerName="console" probeResult="failure" output="Get \"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused" Jan 21 18:16:32 crc kubenswrapper[5099]: I0121 18:16:32.497038 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-lqqhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 18:16:32 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Jan 21 18:16:32 crc kubenswrapper[5099]: [+]process-running ok Jan 21 18:16:32 crc kubenswrapper[5099]: healthz check failed Jan 21 18:16:32 crc kubenswrapper[5099]: I0121 18:16:32.497178 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" podUID="ad44bdbe-5009-4b21-ad83-21185ec2d86d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 18:16:33 crc kubenswrapper[5099]: I0121 18:16:33.496997 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-lqqhp container/router 
Jan 21 18:16:33 crc kubenswrapper[5099]: I0121 18:16:33.496997 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-lqqhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 18:16:33 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld
Jan 21 18:16:33 crc kubenswrapper[5099]: [+]process-running ok
Jan 21 18:16:33 crc kubenswrapper[5099]: healthz check failed
Jan 21 18:16:33 crc kubenswrapper[5099]: I0121 18:16:33.497088 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" podUID="ad44bdbe-5009-4b21-ad83-21185ec2d86d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 18:16:34 crc kubenswrapper[5099]: I0121 18:16:34.497398 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-lqqhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 18:16:34 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld
Jan 21 18:16:34 crc kubenswrapper[5099]: [+]process-running ok
Jan 21 18:16:34 crc kubenswrapper[5099]: healthz check failed
Jan 21 18:16:34 crc kubenswrapper[5099]: I0121 18:16:34.497528 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" podUID="ad44bdbe-5009-4b21-ad83-21185ec2d86d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 18:16:34 crc kubenswrapper[5099]: E0121 18:16:34.867399 5099 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f7db3ee6f61c5c01e7605630be93c8b733c80c9c6c756304b44f3630f68f632a" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 21 18:16:34 crc kubenswrapper[5099]: E0121 18:16:34.869991 5099 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f7db3ee6f61c5c01e7605630be93c8b733c80c9c6c756304b44f3630f68f632a" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 21 18:16:34 crc kubenswrapper[5099]: E0121 18:16:34.871716 5099 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f7db3ee6f61c5c01e7605630be93c8b733c80c9c6c756304b44f3630f68f632a" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 21 18:16:34 crc kubenswrapper[5099]: E0121 18:16:34.871935 5099 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-zjwsj" podUID="e0db719c-cb3c-4c7d-ab76-20a341a011e6" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Jan 21 18:16:35 crc kubenswrapper[5099]: E0121 18:16:35.159219 5099 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee64e319_d2fd_4a23_808e_a4ab684a16af.slice/crio-conmon-febf57ef5d3b5b3933f524609bb87fa63900036e577a74c06ffe7fccde4ea6f9.scope\": RecentStats: unable to find data in memory cache]"
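The three identical "ExecSync cmd from runtime service failed" lines are an exec readiness probe (the literal command ["/bin/bash","-c","test -f /ready/ready"]) being retried against a container that is already stopping; each attempt goes through the CRI runtime's ExecSync call, and when no attempt can run at all the prober reports probeResult="unknown" rather than "failure". A local stand-in for the exec-probe semantics, judged purely on exit codes (this is not the CRI client itself):

package main

import (
	"fmt"
	"os/exec"
)

// execProbe runs the probe command the way an exec probe is judged:
// exit code 0 means ready, a non-zero exit means not ready, and failing
// to run the command at all maps to the "unknown" result seen above.
func execProbe(argv []string) (ready bool, err error) {
	err = exec.Command(argv[0], argv[1:]...).Run()
	if err == nil {
		return true, nil
	}
	if _, isExit := err.(*exec.ExitError); isExit {
		return false, nil // command ran; the ready file is not there yet
	}
	return false, err // could not exec at all -> probeResult "unknown"
}

func main() {
	ready, err := execProbe([]string{"/bin/bash", "-c", "test -f /ready/ready"})
	fmt.Println("ready:", ready, "err:", err)
}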
Jan 21 18:16:35 crc kubenswrapper[5099]: I0121 18:16:35.505061 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-lqqhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 18:16:35 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld
Jan 21 18:16:35 crc kubenswrapper[5099]: [+]process-running ok
Jan 21 18:16:35 crc kubenswrapper[5099]: healthz check failed
Jan 21 18:16:35 crc kubenswrapper[5099]: I0121 18:16:35.505146 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" podUID="ad44bdbe-5009-4b21-ad83-21185ec2d86d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 18:16:36 crc kubenswrapper[5099]: I0121 18:16:36.378139 5099 generic.go:358] "Generic (PLEG): container finished" podID="bee88171-b2f0-49bb-92aa-8a0d79d87cb7" containerID="589a5c693a83d931b05c6c81c33f74e130efe642df03d2df2c5e21874f113621" exitCode=0
Jan 21 18:16:36 crc kubenswrapper[5099]: I0121 18:16:36.380154 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-zlpql" event={"ID":"bee88171-b2f0-49bb-92aa-8a0d79d87cb7","Type":"ContainerDied","Data":"589a5c693a83d931b05c6c81c33f74e130efe642df03d2df2c5e21874f113621"}
Jan 21 18:16:36 crc kubenswrapper[5099]: I0121 18:16:36.498410 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-lqqhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 18:16:36 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld
Jan 21 18:16:36 crc kubenswrapper[5099]: [+]process-running ok
Jan 21 18:16:36 crc kubenswrapper[5099]: healthz check failed
Jan 21 18:16:36 crc kubenswrapper[5099]: I0121 18:16:36.498541 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" podUID="ad44bdbe-5009-4b21-ad83-21185ec2d86d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 18:16:37 crc kubenswrapper[5099]: I0121 18:16:37.653160 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-lqqhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 18:16:37 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld
Jan 21 18:16:37 crc kubenswrapper[5099]: [+]process-running ok
Jan 21 18:16:37 crc kubenswrapper[5099]: healthz check failed
Jan 21 18:16:37 crc kubenswrapper[5099]: I0121 18:16:37.653319 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" podUID="ad44bdbe-5009-4b21-ad83-21185ec2d86d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 18:16:38 crc kubenswrapper[5099]: I0121 18:16:38.496546 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-lqqhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 18:16:38 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld
Jan 21 18:16:38 crc kubenswrapper[5099]: [+]process-running ok
Jan 21 18:16:38 crc kubenswrapper[5099]: healthz check failed
Jan 21 18:16:38 crc kubenswrapper[5099]: I0121 18:16:38.496629 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" podUID="ad44bdbe-5009-4b21-ad83-21185ec2d86d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 18:16:39 crc kubenswrapper[5099]: I0121 18:16:39.497514 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-lqqhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 18:16:39 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld
Jan 21 18:16:39 crc kubenswrapper[5099]: [+]process-running ok
Jan 21 18:16:39 crc kubenswrapper[5099]: healthz check failed
Jan 21 18:16:39 crc kubenswrapper[5099]: I0121 18:16:39.497654 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" podUID="ad44bdbe-5009-4b21-ad83-21185ec2d86d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 18:16:40 crc kubenswrapper[5099]: I0121 18:16:40.415609 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-zjwsj_e0db719c-cb3c-4c7d-ab76-20a341a011e6/kube-multus-additional-cni-plugins/0.log"
Jan 21 18:16:40 crc kubenswrapper[5099]: I0121 18:16:40.416133 5099 generic.go:358] "Generic (PLEG): container finished" podID="e0db719c-cb3c-4c7d-ab76-20a341a011e6" containerID="f7db3ee6f61c5c01e7605630be93c8b733c80c9c6c756304b44f3630f68f632a" exitCode=137
Jan 21 18:16:40 crc kubenswrapper[5099]: I0121 18:16:40.416274 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-zjwsj" event={"ID":"e0db719c-cb3c-4c7d-ab76-20a341a011e6","Type":"ContainerDied","Data":"f7db3ee6f61c5c01e7605630be93c8b733c80c9c6c756304b44f3630f68f632a"}
Jan 21 18:16:40 crc kubenswrapper[5099]: I0121 18:16:40.497327 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-lqqhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 18:16:40 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld
Jan 21 18:16:40 crc kubenswrapper[5099]: [+]process-running ok
Jan 21 18:16:40 crc kubenswrapper[5099]: healthz check failed
Jan 21 18:16:40 crc kubenswrapper[5099]: I0121 18:16:40.497441 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" podUID="ad44bdbe-5009-4b21-ad83-21185ec2d86d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 18:16:41 crc kubenswrapper[5099]: I0121 18:16:41.498070 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-lqqhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 18:16:41 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld
Jan 21 18:16:41 crc kubenswrapper[5099]: [+]process-running ok
Jan 21 18:16:41 crc kubenswrapper[5099]: healthz check failed
Jan 21 18:16:41 crc kubenswrapper[5099]: I0121 18:16:41.498231 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" podUID="ad44bdbe-5009-4b21-ad83-21185ec2d86d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
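The "Generic (PLEG): container finished" lines above come from the pod lifecycle event generator: it periodically relists containers from the runtime, diffs the new state against the previous one, and turns each difference into a ContainerStarted/ContainerDied event for the sync loop. Note exitCode=137 for the cni-sysctl-allowlist container: 128 + 9, i.e. SIGKILL, consistent with its grace period having expired. A toy relist diff with invented types:

package main

import "fmt"

type state string // "running" or "exited" (simplified)

// relist diffs previous vs current container states and emits PLEG-style events.
func relist(prev, cur map[string]state) []string {
	var events []string
	for id, s := range cur {
		switch {
		case s == "running" && prev[id] != "running":
			events = append(events, "ContainerStarted "+id)
		case s == "exited" && prev[id] == "running":
			events = append(events, "ContainerDied "+id)
		}
	}
	return events
}

func main() {
	prev := map[string]state{"f7db3ee6": "running"}
	cur := map[string]state{"f7db3ee6": "exited"}
	for _, e := range relist(prev, cur) {
		fmt.Println(e) // -> ContainerDied f7db3ee6
	}
}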
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 18:16:41 crc kubenswrapper[5099]: I0121 18:16:41.531193 5099 patch_prober.go:28] interesting pod/downloads-747b44746d-zlpql container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Jan 21 18:16:41 crc kubenswrapper[5099]: I0121 18:16:41.531292 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-zlpql" podUID="bee88171-b2f0-49bb-92aa-8a0d79d87cb7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.41:8080/\": dial tcp 10.217.0.41:8080: connect: connection refused" Jan 21 18:16:41 crc kubenswrapper[5099]: I0121 18:16:41.775725 5099 patch_prober.go:28] interesting pod/console-64d44f6ddf-6r5xz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Jan 21 18:16:41 crc kubenswrapper[5099]: I0121 18:16:41.775867 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-6r5xz" podUID="cf468a9b-3840-46c0-8390-79ec278be1d0" containerName="console" probeResult="failure" output="Get \"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused" Jan 21 18:16:42 crc kubenswrapper[5099]: I0121 18:16:42.497386 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-lqqhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 18:16:42 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Jan 21 18:16:42 crc kubenswrapper[5099]: [+]process-running ok Jan 21 18:16:42 crc kubenswrapper[5099]: healthz check failed Jan 21 18:16:42 crc kubenswrapper[5099]: I0121 18:16:42.497541 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" podUID="ad44bdbe-5009-4b21-ad83-21185ec2d86d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 18:16:42 crc kubenswrapper[5099]: I0121 18:16:42.603061 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-p9ggs" Jan 21 18:16:43 crc kubenswrapper[5099]: I0121 18:16:43.496777 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-lqqhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 18:16:43 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Jan 21 18:16:43 crc kubenswrapper[5099]: [+]process-running ok Jan 21 18:16:43 crc kubenswrapper[5099]: healthz check failed Jan 21 18:16:43 crc kubenswrapper[5099]: I0121 18:16:43.496853 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" podUID="ad44bdbe-5009-4b21-ad83-21185ec2d86d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 18:16:44 crc kubenswrapper[5099]: I0121 18:16:44.498123 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" 
Jan 21 18:16:44 crc kubenswrapper[5099]: I0121 18:16:44.501120 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-lqqhp" Jan 21 18:16:44 crc kubenswrapper[5099]: E0121 18:16:44.865887 5099 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f7db3ee6f61c5c01e7605630be93c8b733c80c9c6c756304b44f3630f68f632a is running failed: container process not found" containerID="f7db3ee6f61c5c01e7605630be93c8b733c80c9c6c756304b44f3630f68f632a" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 21 18:16:44 crc kubenswrapper[5099]: E0121 18:16:44.866541 5099 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f7db3ee6f61c5c01e7605630be93c8b733c80c9c6c756304b44f3630f68f632a is running failed: container process not found" containerID="f7db3ee6f61c5c01e7605630be93c8b733c80c9c6c756304b44f3630f68f632a" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 21 18:16:44 crc kubenswrapper[5099]: E0121 18:16:44.867059 5099 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f7db3ee6f61c5c01e7605630be93c8b733c80c9c6c756304b44f3630f68f632a is running failed: container process not found" containerID="f7db3ee6f61c5c01e7605630be93c8b733c80c9c6c756304b44f3630f68f632a" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 21 18:16:44 crc kubenswrapper[5099]: E0121 18:16:44.867107 5099 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f7db3ee6f61c5c01e7605630be93c8b733c80c9c6c756304b44f3630f68f632a is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-zjwsj" podUID="e0db719c-cb3c-4c7d-ab76-20a341a011e6" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 21 18:16:45 crc kubenswrapper[5099]: E0121 18:16:45.364452 5099 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee64e319_d2fd_4a23_808e_a4ab684a16af.slice/crio-conmon-febf57ef5d3b5b3933f524609bb87fa63900036e577a74c06ffe7fccde4ea6f9.scope\": RecentStats: unable to find data in memory cache]" Jan 21 18:16:49 crc kubenswrapper[5099]: I0121 18:16:49.914514 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Jan 21 18:16:49 crc kubenswrapper[5099]: I0121 18:16:49.916120 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="05e481c5-0ad1-4c76-bf43-a32b82b763c7" containerName="collect-profiles" Jan 21 18:16:49 crc kubenswrapper[5099]: I0121 18:16:49.916145 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="05e481c5-0ad1-4c76-bf43-a32b82b763c7" containerName="collect-profiles" Jan 21 18:16:49 crc kubenswrapper[5099]: I0121 18:16:49.916321 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="05e481c5-0ad1-4c76-bf43-a32b82b763c7" containerName="collect-profiles" Jan 21 18:16:49 crc kubenswrapper[5099]: I0121 18:16:49.921319 5099 util.go:30] "No sandbox for pod can be found. 
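"SyncLoop ADD/UPDATE/DELETE/REMOVE", "SyncLoop (PLEG)", and "SyncLoop (probe)" all name the same dispatcher: one loop that selects over pod-config updates from the API server, PLEG events, and probe results, and routes each to a sync of the affected pod. Roughly, with channel and struct types invented for the sketch:

package main

import "fmt"

type podUpdate struct{ op, pod string }   // ADD / UPDATE / DELETE / REMOVE from the api source
type plegEvent struct{ pod, data string } // ContainerStarted / ContainerDied
type probeResult struct{ pod, status string }

// syncLoopIteration waits for the next event from any source and dispatches
// it, mirroring the shape of the log lines; returns false when config closes.
func syncLoopIteration(cfg <-chan podUpdate, pleg <-chan plegEvent, probes <-chan probeResult) bool {
	select {
	case u, ok := <-cfg:
		if !ok {
			return false
		}
		fmt.Printf("SyncLoop %s source=api pods=[%s]\n", u.op, u.pod)
	case e := <-pleg:
		fmt.Printf("SyncLoop (PLEG): event for pod %s: %s\n", e.pod, e.data)
	case p := <-probes:
		fmt.Printf("SyncLoop (probe) status=%s pod=%s\n", p.status, p.pod)
	}
	return true
}

func main() {
	cfg := make(chan podUpdate, 1)
	cfg <- podUpdate{"ADD", "openshift-kube-apiserver/revision-pruner-12-crc"}
	close(cfg)
	pleg, probes := make(chan plegEvent), make(chan probeResult)
	for syncLoopIteration(cfg, pleg, probes) {
	}
}

The RemoveStaleState lines that follow the ADD are housekeeping on that same path: before admitting the new pod, the cpu and memory managers drop resource assignments left behind by an already-deleted pod.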
Jan 21 18:16:49 crc kubenswrapper[5099]: I0121 18:16:49.932285 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"]
Jan 21 18:16:49 crc kubenswrapper[5099]: I0121 18:16:49.945524 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d46bbfd2-40cb-40b4-b894-e9337f575676-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"d46bbfd2-40cb-40b4-b894-e9337f575676\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 21 18:16:49 crc kubenswrapper[5099]: I0121 18:16:49.945583 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d46bbfd2-40cb-40b4-b894-e9337f575676-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"d46bbfd2-40cb-40b4-b894-e9337f575676\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 21 18:16:50 crc kubenswrapper[5099]: I0121 18:16:50.046582 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d46bbfd2-40cb-40b4-b894-e9337f575676-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"d46bbfd2-40cb-40b4-b894-e9337f575676\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 21 18:16:50 crc kubenswrapper[5099]: I0121 18:16:50.046649 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d46bbfd2-40cb-40b4-b894-e9337f575676-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"d46bbfd2-40cb-40b4-b894-e9337f575676\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 21 18:16:50 crc kubenswrapper[5099]: I0121 18:16:50.046795 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d46bbfd2-40cb-40b4-b894-e9337f575676-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"d46bbfd2-40cb-40b4-b894-e9337f575676\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 21 18:16:50 crc kubenswrapper[5099]: I0121 18:16:50.070222 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d46bbfd2-40cb-40b4-b894-e9337f575676-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"d46bbfd2-40cb-40b4-b894-e9337f575676\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 21 18:16:50 crc kubenswrapper[5099]: I0121 18:16:50.256159 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 21 18:16:50 crc kubenswrapper[5099]: I0121 18:16:50.399971 5099 ???:1] "http: TLS handshake error from 192.168.126.11:59652: no serving certificate available for the kubelet"
Jan 21 18:16:50 crc kubenswrapper[5099]: I0121 18:16:50.527720 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"2e565af4-763d-4ad6-bebf-13f785bd61ad","Type":"ContainerDied","Data":"7b4c7f5ebd2dc36c363323dd24c51da0db94fe83e1525d4fe75675e9d4e1aa75"}
Jan 21 18:16:50 crc kubenswrapper[5099]: I0121 18:16:50.527792 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b4c7f5ebd2dc36c363323dd24c51da0db94fe83e1525d4fe75675e9d4e1aa75"
Jan 21 18:16:50 crc kubenswrapper[5099]: I0121 18:16:50.535384 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Jan 21 18:16:50 crc kubenswrapper[5099]: I0121 18:16:50.559142 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2e565af4-763d-4ad6-bebf-13f785bd61ad-kubelet-dir\") pod \"2e565af4-763d-4ad6-bebf-13f785bd61ad\" (UID: \"2e565af4-763d-4ad6-bebf-13f785bd61ad\") "
Jan 21 18:16:50 crc kubenswrapper[5099]: I0121 18:16:50.559252 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e565af4-763d-4ad6-bebf-13f785bd61ad-kube-api-access\") pod \"2e565af4-763d-4ad6-bebf-13f785bd61ad\" (UID: \"2e565af4-763d-4ad6-bebf-13f785bd61ad\") "
Jan 21 18:16:50 crc kubenswrapper[5099]: I0121 18:16:50.561755 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e565af4-763d-4ad6-bebf-13f785bd61ad-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2e565af4-763d-4ad6-bebf-13f785bd61ad" (UID: "2e565af4-763d-4ad6-bebf-13f785bd61ad"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 18:16:50 crc kubenswrapper[5099]: I0121 18:16:50.562327 5099 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2e565af4-763d-4ad6-bebf-13f785bd61ad-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 21 18:16:50 crc kubenswrapper[5099]: I0121 18:16:50.594394 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e565af4-763d-4ad6-bebf-13f785bd61ad-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2e565af4-763d-4ad6-bebf-13f785bd61ad" (UID: "2e565af4-763d-4ad6-bebf-13f785bd61ad"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 18:16:50 crc kubenswrapper[5099]: I0121 18:16:50.663549 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e565af4-763d-4ad6-bebf-13f785bd61ad-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 21 18:16:50 crc kubenswrapper[5099]: I0121 18:16:50.744103 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-zjwsj_e0db719c-cb3c-4c7d-ab76-20a341a011e6/kube-multus-additional-cni-plugins/0.log"
Jan 21 18:16:50 crc kubenswrapper[5099]: I0121 18:16:50.744187 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-zjwsj"
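The two "TLS handshake error ... no serving certificate available for the kubelet" lines mean a client (the API server at 192.168.126.11) reached the kubelet's HTTPS port before the kubelet had obtained its serving certificate, which it requests via a CertificateSigningRequest at startup. The error string is what the kubelet's dynamic certificate store returns from the TLS GetCertificate callback, and the "http: TLS handshake error from ..." prefix is added by Go's net/http server. A stand-alone reproduction of that mechanism (the address is the kubelet's usual serving port, reused here purely for illustration):

package main

import (
	"crypto/tls"
	"errors"
	"net/http"
)

func main() {
	srv := &http.Server{
		Addr: "127.0.0.1:10250",
		TLSConfig: &tls.Config{
			// Until a signed serving certificate is available, every
			// handshake fails; net/http logs each failure as
			// "http: TLS handshake error from <addr>: <err>".
			GetCertificate: func(*tls.ClientHelloInfo) (*tls.Certificate, error) {
				return nil, errors.New("no serving certificate available for the kubelet")
			},
		},
	}
	// cert/key file arguments are empty because GetCertificate drives everything.
	srv.ListenAndServeTLS("", "")
}

Once the node's CSR is approved and the certificate is handed back, the same callback starts returning it and these handshake errors stop.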
Jan 21 18:16:50 crc kubenswrapper[5099]: I0121 18:16:50.765091 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/e0db719c-cb3c-4c7d-ab76-20a341a011e6-ready\") pod \"e0db719c-cb3c-4c7d-ab76-20a341a011e6\" (UID: \"e0db719c-cb3c-4c7d-ab76-20a341a011e6\") "
Jan 21 18:16:50 crc kubenswrapper[5099]: I0121 18:16:50.765165 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glmnx\" (UniqueName: \"kubernetes.io/projected/e0db719c-cb3c-4c7d-ab76-20a341a011e6-kube-api-access-glmnx\") pod \"e0db719c-cb3c-4c7d-ab76-20a341a011e6\" (UID: \"e0db719c-cb3c-4c7d-ab76-20a341a011e6\") "
Jan 21 18:16:50 crc kubenswrapper[5099]: I0121 18:16:50.765322 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e0db719c-cb3c-4c7d-ab76-20a341a011e6-cni-sysctl-allowlist\") pod \"e0db719c-cb3c-4c7d-ab76-20a341a011e6\" (UID: \"e0db719c-cb3c-4c7d-ab76-20a341a011e6\") "
Jan 21 18:16:50 crc kubenswrapper[5099]: I0121 18:16:50.765356 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e0db719c-cb3c-4c7d-ab76-20a341a011e6-tuning-conf-dir\") pod \"e0db719c-cb3c-4c7d-ab76-20a341a011e6\" (UID: \"e0db719c-cb3c-4c7d-ab76-20a341a011e6\") "
Jan 21 18:16:50 crc kubenswrapper[5099]: I0121 18:16:50.765689 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0db719c-cb3c-4c7d-ab76-20a341a011e6-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "e0db719c-cb3c-4c7d-ab76-20a341a011e6" (UID: "e0db719c-cb3c-4c7d-ab76-20a341a011e6"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 18:16:50 crc kubenswrapper[5099]: I0121 18:16:50.765886 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0db719c-cb3c-4c7d-ab76-20a341a011e6-ready" (OuterVolumeSpecName: "ready") pod "e0db719c-cb3c-4c7d-ab76-20a341a011e6" (UID: "e0db719c-cb3c-4c7d-ab76-20a341a011e6"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 18:16:50 crc kubenswrapper[5099]: I0121 18:16:50.766340 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0db719c-cb3c-4c7d-ab76-20a341a011e6-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "e0db719c-cb3c-4c7d-ab76-20a341a011e6" (UID: "e0db719c-cb3c-4c7d-ab76-20a341a011e6"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 18:16:50 crc kubenswrapper[5099]: I0121 18:16:50.772614 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0db719c-cb3c-4c7d-ab76-20a341a011e6-kube-api-access-glmnx" (OuterVolumeSpecName: "kube-api-access-glmnx") pod "e0db719c-cb3c-4c7d-ab76-20a341a011e6" (UID: "e0db719c-cb3c-4c7d-ab76-20a341a011e6"). InnerVolumeSpecName "kube-api-access-glmnx". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 18:16:50 crc kubenswrapper[5099]: I0121 18:16:50.867816 5099 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e0db719c-cb3c-4c7d-ab76-20a341a011e6-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Jan 21 18:16:50 crc kubenswrapper[5099]: I0121 18:16:50.867866 5099 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e0db719c-cb3c-4c7d-ab76-20a341a011e6-tuning-conf-dir\") on node \"crc\" DevicePath \"\""
Jan 21 18:16:50 crc kubenswrapper[5099]: I0121 18:16:50.867882 5099 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/e0db719c-cb3c-4c7d-ab76-20a341a011e6-ready\") on node \"crc\" DevicePath \"\""
Jan 21 18:16:50 crc kubenswrapper[5099]: I0121 18:16:50.867893 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-glmnx\" (UniqueName: \"kubernetes.io/projected/e0db719c-cb3c-4c7d-ab76-20a341a011e6-kube-api-access-glmnx\") on node \"crc\" DevicePath \"\""
Jan 21 18:16:51 crc kubenswrapper[5099]: I0121 18:16:51.050973 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-tjl2r"]
Jan 21 18:16:51 crc kubenswrapper[5099]: I0121 18:16:51.442335 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-tsdhb"]
Jan 21 18:16:51 crc kubenswrapper[5099]: W0121 18:16:51.468878 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d26f0ad_829f_4f64_82b5_1292bd2316f0.slice/crio-b6c8c1c23cb4a564917921b1f5e635f25fe587e5b50085c44670962b4de7b160 WatchSource:0}: Error finding container b6c8c1c23cb4a564917921b1f5e635f25fe587e5b50085c44670962b4de7b160: Status 404 returned error can't find the container with id b6c8c1c23cb4a564917921b1f5e635f25fe587e5b50085c44670962b4de7b160
Jan 21 18:16:51 crc kubenswrapper[5099]: I0121 18:16:51.530940 5099 patch_prober.go:28] interesting pod/downloads-747b44746d-zlpql container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body=
Jan 21 18:16:51 crc kubenswrapper[5099]: I0121 18:16:51.531034 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-zlpql" podUID="bee88171-b2f0-49bb-92aa-8a0d79d87cb7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.41:8080/\": dial tcp 10.217.0.41:8080: connect: connection refused"
Jan 21 18:16:51 crc kubenswrapper[5099]: I0121 18:16:51.547886 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x5qmz" event={"ID":"eb54ba82-2a5f-46a9-8c2c-f59dc17d237d","Type":"ContainerStarted","Data":"d10e7b5ede91b4ac4524ed46b4972484ecd78e91e91cc21c2bfe49085d73cb41"}
Jan 21 18:16:51 crc kubenswrapper[5099]: I0121 18:16:51.563265 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-zjwsj_e0db719c-cb3c-4c7d-ab76-20a341a011e6/kube-multus-additional-cni-plugins/0.log"
Jan 21 18:16:51 crc kubenswrapper[5099]: I0121 18:16:51.563481 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-zjwsj" event={"ID":"e0db719c-cb3c-4c7d-ab76-20a341a011e6","Type":"ContainerDied","Data":"2bcaca77323bcd511b8997b22a3a2d0da33edaf8b87c5eed369695473a8a4798"}
event={"ID":"e0db719c-cb3c-4c7d-ab76-20a341a011e6","Type":"ContainerDied","Data":"2bcaca77323bcd511b8997b22a3a2d0da33edaf8b87c5eed369695473a8a4798"} Jan 21 18:16:51 crc kubenswrapper[5099]: I0121 18:16:51.563577 5099 scope.go:117] "RemoveContainer" containerID="f7db3ee6f61c5c01e7605630be93c8b733c80c9c6c756304b44f3630f68f632a" Jan 21 18:16:51 crc kubenswrapper[5099]: I0121 18:16:51.563901 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-zjwsj" Jan 21 18:16:51 crc kubenswrapper[5099]: I0121 18:16:51.600819 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nnlkl" event={"ID":"ec86143c-2662-474d-857f-b54aee6207b0","Type":"ContainerStarted","Data":"3e06708a1a9ba11fd3086c04396656520a0f1346f752e90c3e7008aa3bb39bfa"} Jan 21 18:16:51 crc kubenswrapper[5099]: I0121 18:16:51.606247 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"64e6f34d-28ce-47dd-9279-1c5fc5fb8823","Type":"ContainerStarted","Data":"3eef0c81df494a34e73311616f1a9700518756475bc9deef53488ae2caa0b1e8"} Jan 21 18:16:51 crc kubenswrapper[5099]: I0121 18:16:51.640379 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-tsdhb" event={"ID":"0d26f0ad-829f-4f64-82b5-1292bd2316f0","Type":"ContainerStarted","Data":"b6c8c1c23cb4a564917921b1f5e635f25fe587e5b50085c44670962b4de7b160"} Jan 21 18:16:51 crc kubenswrapper[5099]: I0121 18:16:51.647176 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6gg9l" event={"ID":"97792460-87be-4332-8f5b-dd5e8e2e5d63","Type":"ContainerStarted","Data":"6c28dc402d9f28846a2abf5780285789f6afbdae9ac2c4ceb89461f83c80a4b0"} Jan 21 18:16:51 crc kubenswrapper[5099]: I0121 18:16:51.667626 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/revision-pruner-6-crc" podStartSLOduration=37.667583378 podStartE2EDuration="37.667583378s" podCreationTimestamp="2026-01-21 18:16:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:16:51.657413536 +0000 UTC m=+169.071375987" watchObservedRunningTime="2026-01-21 18:16:51.667583378 +0000 UTC m=+169.081545839" Jan 21 18:16:51 crc kubenswrapper[5099]: I0121 18:16:51.736213 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-zlpql" event={"ID":"bee88171-b2f0-49bb-92aa-8a0d79d87cb7","Type":"ContainerStarted","Data":"aad395e41107c3b1744378e58447fc90b05159be1f02ccd71718941dadf51d92"} Jan 21 18:16:51 crc kubenswrapper[5099]: I0121 18:16:51.737623 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-zlpql" Jan 21 18:16:51 crc kubenswrapper[5099]: I0121 18:16:51.741072 5099 patch_prober.go:28] interesting pod/downloads-747b44746d-zlpql container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Jan 21 18:16:51 crc kubenswrapper[5099]: I0121 18:16:51.741232 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-zlpql" podUID="bee88171-b2f0-49bb-92aa-8a0d79d87cb7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.41:8080/\": 
dial tcp 10.217.0.41:8080: connect: connection refused" Jan 21 18:16:51 crc kubenswrapper[5099]: I0121 18:16:51.764638 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-zjwsj"] Jan 21 18:16:51 crc kubenswrapper[5099]: I0121 18:16:51.765545 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-zjwsj"] Jan 21 18:16:51 crc kubenswrapper[5099]: I0121 18:16:51.808511 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Jan 21 18:16:52 crc kubenswrapper[5099]: I0121 18:16:52.242030 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0db719c-cb3c-4c7d-ab76-20a341a011e6" path="/var/lib/kubelet/pods/e0db719c-cb3c-4c7d-ab76-20a341a011e6/volumes" Jan 21 18:16:52 crc kubenswrapper[5099]: I0121 18:16:52.242615 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-6r5xz" Jan 21 18:16:52 crc kubenswrapper[5099]: I0121 18:16:52.242634 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" event={"ID":"90ce37a0-d38f-4712-89f0-8572a04c303d","Type":"ContainerStarted","Data":"20a0a30e793feb11419721e34e5f638fc2dccd9cbfcfe4e7f600de83788284f9"} Jan 21 18:16:52 crc kubenswrapper[5099]: I0121 18:16:52.317687 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v9zdl" event={"ID":"b73a0c1c-91ce-4902-bbcf-cf68e52e0236","Type":"ContainerStarted","Data":"b6e1d37071ba7f36cdda930b077cd2704f28df04c275221a52e8739d1cab337f"} Jan 21 18:16:52 crc kubenswrapper[5099]: I0121 18:16:52.317805 5099 util.go:48] "No ready sandbox for pod can be found. 
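The DELETE → REMOVE → "RemoveContainer" → "Cleaned up orphaned pod volumes dir" run above is the teardown tail of a pod's life: the API object is deleted, the kubelet removes the dead containers, unmounts the volumes (the UnmountVolume/TearDown/"Volume detached" lines earlier), and finally deletes the now-empty /var/lib/kubelet/pods/<uid>/volumes directory. A sketch of that last cleanup step; the function, paths, and mount check are illustrative, and the real kubelet refuses to clean up while anything is still mounted:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cleanupOrphanedPodDir removes a terminated pod's volumes directory,
// but only once nothing is mounted under it anymore.
func cleanupOrphanedPodDir(kubeletRoot, podUID string, stillMounted func(string) bool) error {
	dir := filepath.Join(kubeletRoot, "pods", podUID, "volumes")
	if stillMounted(dir) {
		return fmt.Errorf("pod %s: volumes still mounted, skipping", podUID)
	}
	if err := os.RemoveAll(dir); err != nil {
		return err
	}
	fmt.Printf("Cleaned up orphaned pod volumes dir podUID=%q path=%q\n", podUID, dir)
	return nil
}

func main() {
	notMounted := func(string) bool { return false } // assume unmounts already succeeded
	cleanupOrphanedPodDir("/tmp/kubelet-demo", "e0db719c-cb3c-4c7d-ab76-20a341a011e6", notMounted)
}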
Jan 21 18:16:52 crc kubenswrapper[5099]: I0121 18:16:52.380461 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-6r5xz"
Jan 21 18:16:53 crc kubenswrapper[5099]: I0121 18:16:53.414277 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xdblj" event={"ID":"a5ddd790-cf10-4dfc-a9ed-2ad08824bf51","Type":"ContainerDied","Data":"67b0b9555da9e2ade9035a90aa1af2f88fcba784b8b51df408cc088df0bc288b"}
Jan 21 18:16:53 crc kubenswrapper[5099]: I0121 18:16:53.415810 5099 generic.go:358] "Generic (PLEG): container finished" podID="a5ddd790-cf10-4dfc-a9ed-2ad08824bf51" containerID="67b0b9555da9e2ade9035a90aa1af2f88fcba784b8b51df408cc088df0bc288b" exitCode=0
Jan 21 18:16:53 crc kubenswrapper[5099]: I0121 18:16:53.440123 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"09cccc34363c12a96768b7ce3495c4bac74b69d6af5ed039171d1a8c09bd0522"}
Jan 21 18:16:53 crc kubenswrapper[5099]: I0121 18:16:53.470264 5099 generic.go:358] "Generic (PLEG): container finished" podID="97792460-87be-4332-8f5b-dd5e8e2e5d63" containerID="6c28dc402d9f28846a2abf5780285789f6afbdae9ac2c4ceb89461f83c80a4b0" exitCode=0
Jan 21 18:16:53 crc kubenswrapper[5099]: I0121 18:16:53.470472 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6gg9l" event={"ID":"97792460-87be-4332-8f5b-dd5e8e2e5d63","Type":"ContainerDied","Data":"6c28dc402d9f28846a2abf5780285789f6afbdae9ac2c4ceb89461f83c80a4b0"}
Jan 21 18:16:53 crc kubenswrapper[5099]: I0121 18:16:53.470522 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6gg9l" event={"ID":"97792460-87be-4332-8f5b-dd5e8e2e5d63","Type":"ContainerStarted","Data":"1694c543ce20282c680cee52b0c83185eab164807db3301aa6bd16c94d59b5cf"}
Jan 21 18:16:53 crc kubenswrapper[5099]: I0121 18:16:53.486490 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" event={"ID":"90ce37a0-d38f-4712-89f0-8572a04c303d","Type":"ContainerStarted","Data":"5bd603a21814f9bf4bef85f84ca0bba031d42b79c8a0b15414fea6e193421340"}
Jan 21 18:16:53 crc kubenswrapper[5099]: I0121 18:16:53.487035 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66587d64c8-tjl2r"
Jan 21 18:16:53 crc kubenswrapper[5099]: I0121 18:16:53.490812 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"a48d470e37c8b880b858ed2191e35af446e7e98f6c4d8de16fb4bac920873324"}
Jan 21 18:16:53 crc kubenswrapper[5099]: I0121 18:16:53.495854 5099 generic.go:358] "Generic (PLEG): container finished" podID="b73a0c1c-91ce-4902-bbcf-cf68e52e0236" containerID="b6e1d37071ba7f36cdda930b077cd2704f28df04c275221a52e8739d1cab337f" exitCode=0
Jan 21 18:16:53 crc kubenswrapper[5099]: I0121 18:16:53.496291 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v9zdl" event={"ID":"b73a0c1c-91ce-4902-bbcf-cf68e52e0236","Type":"ContainerDied","Data":"b6e1d37071ba7f36cdda930b077cd2704f28df04c275221a52e8739d1cab337f"}
Jan 21 18:16:53 crc kubenswrapper[5099]: I0121 18:16:53.507137 5099 generic.go:358] "Generic (PLEG): container finished" podID="450bc18d-8ddc-42eb-bc2b-0cd44d7198b8" containerID="d7b175f37de9296ba0068a10fca50510714cbc9fd8c3c99a087e8b2a9c040329" exitCode=0
Jan 21 18:16:53 crc kubenswrapper[5099]: I0121 18:16:53.507324 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6fsvr" event={"ID":"450bc18d-8ddc-42eb-bc2b-0cd44d7198b8","Type":"ContainerDied","Data":"d7b175f37de9296ba0068a10fca50510714cbc9fd8c3c99a087e8b2a9c040329"}
Jan 21 18:16:53 crc kubenswrapper[5099]: I0121 18:16:53.509939 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6gg9l" podStartSLOduration=8.725030087 podStartE2EDuration="45.509916685s" podCreationTimestamp="2026-01-21 18:16:08 +0000 UTC" firstStartedPulling="2026-01-21 18:16:13.928405472 +0000 UTC m=+131.342367933" lastFinishedPulling="2026-01-21 18:16:50.71329207 +0000 UTC m=+168.127254531" observedRunningTime="2026-01-21 18:16:53.504922363 +0000 UTC m=+170.918884824" watchObservedRunningTime="2026-01-21 18:16:53.509916685 +0000 UTC m=+170.923879146"
Jan 21 18:16:53 crc kubenswrapper[5099]: I0121 18:16:53.531438 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qsx2f" event={"ID":"28d3b79b-3ce4-427c-834d-9d4b2f9f0601","Type":"ContainerStarted","Data":"8323eef1075a019470e8225c269a356ef1cd9d8ea33dba489865791db39b5542"}
Jan 21 18:16:53 crc kubenswrapper[5099]: I0121 18:16:53.554508 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"d46bbfd2-40cb-40b4-b894-e9337f575676","Type":"ContainerStarted","Data":"cd610bb6f640b25314de4fb0b5a72ddf8216e16af7ead7129150d0ff2b934ef2"}
Jan 21 18:16:53 crc kubenswrapper[5099]: I0121 18:16:53.592661 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" podStartSLOduration=142.59262228 podStartE2EDuration="2m22.59262228s" podCreationTimestamp="2026-01-21 18:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:16:53.534121925 +0000 UTC m=+170.948084386" watchObservedRunningTime="2026-01-21 18:16:53.59262228 +0000 UTC m=+171.006584741"
Jan 21 18:16:53 crc kubenswrapper[5099]: I0121 18:16:53.606673 5099 generic.go:358] "Generic (PLEG): container finished" podID="eb54ba82-2a5f-46a9-8c2c-f59dc17d237d" containerID="d10e7b5ede91b4ac4524ed46b4972484ecd78e91e91cc21c2bfe49085d73cb41" exitCode=0
Jan 21 18:16:53 crc kubenswrapper[5099]: I0121 18:16:53.606826 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x5qmz" event={"ID":"eb54ba82-2a5f-46a9-8c2c-f59dc17d237d","Type":"ContainerDied","Data":"d10e7b5ede91b4ac4524ed46b4972484ecd78e91e91cc21c2bfe49085d73cb41"}
Jan 21 18:16:53 crc kubenswrapper[5099]: I0121 18:16:53.640614 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ncssr" event={"ID":"4202775a-8750-4d76-ad90-6a5703048787","Type":"ContainerStarted","Data":"b0a6aad0aacb2963dd316a565b918980130bf5a8cd0ed2199e65cf64fa7511d9"}
Jan 21 18:16:53 crc kubenswrapper[5099]: I0121 18:16:53.652237 5099 generic.go:358] "Generic (PLEG): container finished" podID="ec86143c-2662-474d-857f-b54aee6207b0" containerID="3e06708a1a9ba11fd3086c04396656520a0f1346f752e90c3e7008aa3bb39bfa" exitCode=0
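The two duration fields in "Observed pod startup duration" differ only by image-pull time: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally subtracts the pull window. For redhat-marketplace-6gg9l above: 18:16:53.509916685 - 18:16:08 = 45.509916685 s end to end, minus the pull window 18:16:13.928405472 to 18:16:50.713292070 (36.784886598 s), leaves exactly the reported 8.725030087 s. Entries whose pull timestamps are the zero time 0001-01-01 never pulled, so both fields come out equal. The same arithmetic, checked in Go:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the redhat-marketplace-6gg9l log entry above.
	parse := func(s string) time.Time {
		t, _ := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		return t
	}
	created := parse("2026-01-21 18:16:08 +0000 UTC")
	running := parse("2026-01-21 18:16:53.509916685 +0000 UTC")
	pullStart := parse("2026-01-21 18:16:13.928405472 +0000 UTC")
	pullEnd := parse("2026-01-21 18:16:50.71329207 +0000 UTC")

	e2e := running.Sub(created)         // podStartE2EDuration: 45.509916685s
	slo := e2e - pullEnd.Sub(pullStart) // podStartSLOduration: 8.725030087s
	fmt.Println(e2e, slo)
}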
containerID="3e06708a1a9ba11fd3086c04396656520a0f1346f752e90c3e7008aa3bb39bfa" exitCode=0 Jan 21 18:16:53 crc kubenswrapper[5099]: I0121 18:16:53.652380 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nnlkl" event={"ID":"ec86143c-2662-474d-857f-b54aee6207b0","Type":"ContainerDied","Data":"3e06708a1a9ba11fd3086c04396656520a0f1346f752e90c3e7008aa3bb39bfa"} Jan 21 18:16:53 crc kubenswrapper[5099]: I0121 18:16:53.671018 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"c11ff5773debd69d69f243d59e32f961c1baabb55498856a069b8774cb04e3dd"} Jan 21 18:16:54 crc kubenswrapper[5099]: I0121 18:16:54.138188 5099 patch_prober.go:28] interesting pod/downloads-747b44746d-zlpql container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Jan 21 18:16:54 crc kubenswrapper[5099]: I0121 18:16:54.138632 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-zlpql" podUID="bee88171-b2f0-49bb-92aa-8a0d79d87cb7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.41:8080/\": dial tcp 10.217.0.41:8080: connect: connection refused" Jan 21 18:16:54 crc kubenswrapper[5099]: I0121 18:16:54.138771 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 18:16:54 crc kubenswrapper[5099]: I0121 18:16:54.678790 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"b87510c147429756a0333eff6f2bf99c3c76eb5f66bcb2ff08eb3361ff2a6c69"} Jan 21 18:16:54 crc kubenswrapper[5099]: I0121 18:16:54.686148 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v9zdl" event={"ID":"b73a0c1c-91ce-4902-bbcf-cf68e52e0236","Type":"ContainerStarted","Data":"78ae5115408c6dcccb3a4adb49a3bc3aed747695abc5fb33a2bd87b9ee88db5d"} Jan 21 18:16:54 crc kubenswrapper[5099]: I0121 18:16:54.691819 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6fsvr" event={"ID":"450bc18d-8ddc-42eb-bc2b-0cd44d7198b8","Type":"ContainerStarted","Data":"4c9a2e31555489e21f34e0136a7b06646e4dad4cc1fc51154dc4f184ed4f756e"} Jan 21 18:16:54 crc kubenswrapper[5099]: I0121 18:16:54.695120 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"d46bbfd2-40cb-40b4-b894-e9337f575676","Type":"ContainerStarted","Data":"bf1e5b97f5f9b0fa7e32067fa5e90429d7d5bc4306eb55141a1f706d7b817c59"} Jan 21 18:16:54 crc kubenswrapper[5099]: I0121 18:16:54.697705 5099 generic.go:358] "Generic (PLEG): container finished" podID="64e6f34d-28ce-47dd-9279-1c5fc5fb8823" containerID="3eef0c81df494a34e73311616f1a9700518756475bc9deef53488ae2caa0b1e8" exitCode=0 Jan 21 18:16:54 crc kubenswrapper[5099]: I0121 18:16:54.697840 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"64e6f34d-28ce-47dd-9279-1c5fc5fb8823","Type":"ContainerDied","Data":"3eef0c81df494a34e73311616f1a9700518756475bc9deef53488ae2caa0b1e8"} Jan 21 18:16:54 
crc kubenswrapper[5099]: I0121 18:16:54.701514 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-tsdhb" event={"ID":"0d26f0ad-829f-4f64-82b5-1292bd2316f0","Type":"ContainerStarted","Data":"03c9f9f400a5da27ae53b50759ce35e95ec35a7a836635d6ce10cd27cac73ac0"} Jan 21 18:16:54 crc kubenswrapper[5099]: I0121 18:16:54.701674 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-tsdhb" event={"ID":"0d26f0ad-829f-4f64-82b5-1292bd2316f0","Type":"ContainerStarted","Data":"c3eac020037832de27c9608a11d810625c377e9099827a54af2e51a6a36f67e9"} Jan 21 18:16:54 crc kubenswrapper[5099]: I0121 18:16:54.711787 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"7eed97a1c4b76d759f4018ecf10ea7089be67d69d6c01a542aada269841813d6"} Jan 21 18:16:54 crc kubenswrapper[5099]: I0121 18:16:54.719267 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xdblj" event={"ID":"a5ddd790-cf10-4dfc-a9ed-2ad08824bf51","Type":"ContainerStarted","Data":"def40824eb02a8475858cbf298f5a754aaabe9aeaaebea654de5073d6928cd36"} Jan 21 18:16:54 crc kubenswrapper[5099]: I0121 18:16:54.725445 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"76789dc800fb3b0158dd50638c8fa6e8cdd2646139116a68b917badb337e018c"} Jan 21 18:16:54 crc kubenswrapper[5099]: I0121 18:16:54.729654 5099 patch_prober.go:28] interesting pod/downloads-747b44746d-zlpql container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Jan 21 18:16:54 crc kubenswrapper[5099]: I0121 18:16:54.730195 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-zlpql" podUID="bee88171-b2f0-49bb-92aa-8a0d79d87cb7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.41:8080/\": dial tcp 10.217.0.41:8080: connect: connection refused" Jan 21 18:16:55 crc kubenswrapper[5099]: I0121 18:16:55.019954 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-v9zdl" podStartSLOduration=10.791044546 podStartE2EDuration="49.019921761s" podCreationTimestamp="2026-01-21 18:16:06 +0000 UTC" firstStartedPulling="2026-01-21 18:16:12.492097295 +0000 UTC m=+129.906059756" lastFinishedPulling="2026-01-21 18:16:50.72097451 +0000 UTC m=+168.134936971" observedRunningTime="2026-01-21 18:16:54.954540074 +0000 UTC m=+172.368502545" watchObservedRunningTime="2026-01-21 18:16:55.019921761 +0000 UTC m=+172.433884232" Jan 21 18:16:55 crc kubenswrapper[5099]: I0121 18:16:55.022436 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-12-crc" podStartSLOduration=6.022420633 podStartE2EDuration="6.022420633s" podCreationTimestamp="2026-01-21 18:16:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:16:55.007594996 +0000 UTC m=+172.421557467" watchObservedRunningTime="2026-01-21 18:16:55.022420633 +0000 UTC m=+172.436383094" Jan 21 18:16:55 
crc kubenswrapper[5099]: I0121 18:16:55.069691 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-tsdhb" podStartSLOduration=144.06966802 podStartE2EDuration="2m24.06966802s" podCreationTimestamp="2026-01-21 18:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:16:55.065812935 +0000 UTC m=+172.479775396" watchObservedRunningTime="2026-01-21 18:16:55.06966802 +0000 UTC m=+172.483630481" Jan 21 18:16:55 crc kubenswrapper[5099]: I0121 18:16:55.135122 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xdblj" podStartSLOduration=10.797383906 podStartE2EDuration="49.135098197s" podCreationTimestamp="2026-01-21 18:16:06 +0000 UTC" firstStartedPulling="2026-01-21 18:16:12.426270931 +0000 UTC m=+129.840233392" lastFinishedPulling="2026-01-21 18:16:50.763985222 +0000 UTC m=+168.177947683" observedRunningTime="2026-01-21 18:16:55.091895779 +0000 UTC m=+172.505858240" watchObservedRunningTime="2026-01-21 18:16:55.135098197 +0000 UTC m=+172.549060658" Jan 21 18:16:55 crc kubenswrapper[5099]: I0121 18:16:55.135266 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6fsvr" podStartSLOduration=10.890785912 podStartE2EDuration="49.135260291s" podCreationTimestamp="2026-01-21 18:16:06 +0000 UTC" firstStartedPulling="2026-01-21 18:16:12.535862198 +0000 UTC m=+129.949824659" lastFinishedPulling="2026-01-21 18:16:50.780336577 +0000 UTC m=+168.194299038" observedRunningTime="2026-01-21 18:16:55.132639636 +0000 UTC m=+172.546602087" watchObservedRunningTime="2026-01-21 18:16:55.135260291 +0000 UTC m=+172.549222752" Jan 21 18:16:55 crc kubenswrapper[5099]: E0121 18:16:55.478653 5099 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee64e319_d2fd_4a23_808e_a4ab684a16af.slice/crio-conmon-febf57ef5d3b5b3933f524609bb87fa63900036e577a74c06ffe7fccde4ea6f9.scope\": RecentStats: unable to find data in memory cache]" Jan 21 18:16:55 crc kubenswrapper[5099]: I0121 18:16:55.750890 5099 generic.go:358] "Generic (PLEG): container finished" podID="d46bbfd2-40cb-40b4-b894-e9337f575676" containerID="bf1e5b97f5f9b0fa7e32067fa5e90429d7d5bc4306eb55141a1f706d7b817c59" exitCode=0 Jan 21 18:16:55 crc kubenswrapper[5099]: I0121 18:16:55.751137 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"d46bbfd2-40cb-40b4-b894-e9337f575676","Type":"ContainerDied","Data":"bf1e5b97f5f9b0fa7e32067fa5e90429d7d5bc4306eb55141a1f706d7b817c59"} Jan 21 18:16:56 crc kubenswrapper[5099]: I0121 18:16:56.490027 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 21 18:16:56 crc kubenswrapper[5099]: I0121 18:16:56.653472 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/64e6f34d-28ce-47dd-9279-1c5fc5fb8823-kubelet-dir\") pod \"64e6f34d-28ce-47dd-9279-1c5fc5fb8823\" (UID: \"64e6f34d-28ce-47dd-9279-1c5fc5fb8823\") " Jan 21 18:16:56 crc kubenswrapper[5099]: I0121 18:16:56.653636 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64e6f34d-28ce-47dd-9279-1c5fc5fb8823-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "64e6f34d-28ce-47dd-9279-1c5fc5fb8823" (UID: "64e6f34d-28ce-47dd-9279-1c5fc5fb8823"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 18:16:56 crc kubenswrapper[5099]: I0121 18:16:56.653699 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/64e6f34d-28ce-47dd-9279-1c5fc5fb8823-kube-api-access\") pod \"64e6f34d-28ce-47dd-9279-1c5fc5fb8823\" (UID: \"64e6f34d-28ce-47dd-9279-1c5fc5fb8823\") " Jan 21 18:16:56 crc kubenswrapper[5099]: I0121 18:16:56.653993 5099 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/64e6f34d-28ce-47dd-9279-1c5fc5fb8823-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 18:16:56 crc kubenswrapper[5099]: I0121 18:16:56.662995 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64e6f34d-28ce-47dd-9279-1c5fc5fb8823-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "64e6f34d-28ce-47dd-9279-1c5fc5fb8823" (UID: "64e6f34d-28ce-47dd-9279-1c5fc5fb8823"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:16:56 crc kubenswrapper[5099]: I0121 18:16:56.754905 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/64e6f34d-28ce-47dd-9279-1c5fc5fb8823-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 18:16:56 crc kubenswrapper[5099]: I0121 18:16:56.788622 5099 generic.go:358] "Generic (PLEG): container finished" podID="28d3b79b-3ce4-427c-834d-9d4b2f9f0601" containerID="8323eef1075a019470e8225c269a356ef1cd9d8ea33dba489865791db39b5542" exitCode=0 Jan 21 18:16:56 crc kubenswrapper[5099]: I0121 18:16:56.788708 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qsx2f" event={"ID":"28d3b79b-3ce4-427c-834d-9d4b2f9f0601","Type":"ContainerDied","Data":"8323eef1075a019470e8225c269a356ef1cd9d8ea33dba489865791db39b5542"} Jan 21 18:16:56 crc kubenswrapper[5099]: I0121 18:16:56.793634 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x5qmz" event={"ID":"eb54ba82-2a5f-46a9-8c2c-f59dc17d237d","Type":"ContainerStarted","Data":"7be0581f185328c6af4421036de97f76f662ec44ee2d376efb2cd0225cd73475"} Jan 21 18:16:56 crc kubenswrapper[5099]: I0121 18:16:56.801049 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nnlkl" event={"ID":"ec86143c-2662-474d-857f-b54aee6207b0","Type":"ContainerStarted","Data":"ab2d4af6866471910be8ce0763e5b105f9c8b93930bd8a8311d5ccd5de844c26"} Jan 21 18:16:56 crc kubenswrapper[5099]: I0121 18:16:56.803073 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 21 18:16:56 crc kubenswrapper[5099]: I0121 18:16:56.803855 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"64e6f34d-28ce-47dd-9279-1c5fc5fb8823","Type":"ContainerDied","Data":"2c29e9fdb7111dbeb90e0e21fc8f758362cbf8af7d36c530de75d00014afd913"} Jan 21 18:16:56 crc kubenswrapper[5099]: I0121 18:16:56.803916 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c29e9fdb7111dbeb90e0e21fc8f758362cbf8af7d36c530de75d00014afd913" Jan 21 18:16:56 crc kubenswrapper[5099]: I0121 18:16:56.811683 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 21 18:16:56 crc kubenswrapper[5099]: I0121 18:16:56.812375 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e0db719c-cb3c-4c7d-ab76-20a341a011e6" containerName="kube-multus-additional-cni-plugins" Jan 21 18:16:56 crc kubenswrapper[5099]: I0121 18:16:56.812397 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0db719c-cb3c-4c7d-ab76-20a341a011e6" containerName="kube-multus-additional-cni-plugins" Jan 21 18:16:56 crc kubenswrapper[5099]: I0121 18:16:56.812428 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="64e6f34d-28ce-47dd-9279-1c5fc5fb8823" containerName="pruner" Jan 21 18:16:56 crc kubenswrapper[5099]: I0121 18:16:56.812434 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="64e6f34d-28ce-47dd-9279-1c5fc5fb8823" containerName="pruner" Jan 21 18:16:56 crc kubenswrapper[5099]: I0121 18:16:56.812447 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2e565af4-763d-4ad6-bebf-13f785bd61ad" containerName="pruner" Jan 21 18:16:56 crc kubenswrapper[5099]: I0121 18:16:56.812453 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e565af4-763d-4ad6-bebf-13f785bd61ad" containerName="pruner" Jan 21 18:16:56 crc kubenswrapper[5099]: I0121 18:16:56.812545 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="2e565af4-763d-4ad6-bebf-13f785bd61ad" containerName="pruner" Jan 21 18:16:56 crc kubenswrapper[5099]: I0121 18:16:56.812556 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="64e6f34d-28ce-47dd-9279-1c5fc5fb8823" containerName="pruner" Jan 21 18:16:56 crc kubenswrapper[5099]: I0121 18:16:56.812565 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="e0db719c-cb3c-4c7d-ab76-20a341a011e6" containerName="kube-multus-additional-cni-plugins" Jan 21 18:16:56 crc kubenswrapper[5099]: I0121 18:16:56.992506 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 21 18:16:56 crc kubenswrapper[5099]: I0121 18:16:56.992970 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 21 18:16:57 crc kubenswrapper[5099]: I0121 18:16:57.072641 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/91b29a2c-7464-4339-8b0b-218b0334f706-var-lock\") pod \"installer-12-crc\" (UID: \"91b29a2c-7464-4339-8b0b-218b0334f706\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 21 18:16:57 crc kubenswrapper[5099]: I0121 18:16:57.072726 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/91b29a2c-7464-4339-8b0b-218b0334f706-kube-api-access\") pod \"installer-12-crc\" (UID: \"91b29a2c-7464-4339-8b0b-218b0334f706\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 21 18:16:57 crc kubenswrapper[5099]: I0121 18:16:57.072914 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/91b29a2c-7464-4339-8b0b-218b0334f706-kubelet-dir\") pod \"installer-12-crc\" (UID: \"91b29a2c-7464-4339-8b0b-218b0334f706\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 21 18:16:57 crc kubenswrapper[5099]: I0121 18:16:57.173979 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/91b29a2c-7464-4339-8b0b-218b0334f706-var-lock\") pod \"installer-12-crc\" (UID: \"91b29a2c-7464-4339-8b0b-218b0334f706\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 21 18:16:57 crc kubenswrapper[5099]: I0121 18:16:57.174331 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/91b29a2c-7464-4339-8b0b-218b0334f706-kube-api-access\") pod \"installer-12-crc\" (UID: \"91b29a2c-7464-4339-8b0b-218b0334f706\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 21 18:16:57 crc kubenswrapper[5099]: I0121 18:16:57.174455 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/91b29a2c-7464-4339-8b0b-218b0334f706-kubelet-dir\") pod \"installer-12-crc\" (UID: \"91b29a2c-7464-4339-8b0b-218b0334f706\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 21 18:16:57 crc kubenswrapper[5099]: I0121 18:16:57.174170 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/91b29a2c-7464-4339-8b0b-218b0334f706-var-lock\") pod \"installer-12-crc\" (UID: \"91b29a2c-7464-4339-8b0b-218b0334f706\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 21 18:16:57 crc kubenswrapper[5099]: I0121 18:16:57.176348 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/91b29a2c-7464-4339-8b0b-218b0334f706-kubelet-dir\") pod \"installer-12-crc\" (UID: \"91b29a2c-7464-4339-8b0b-218b0334f706\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 21 18:16:57 crc kubenswrapper[5099]: I0121 18:16:57.210965 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/91b29a2c-7464-4339-8b0b-218b0334f706-kube-api-access\") pod \"installer-12-crc\" (UID: \"91b29a2c-7464-4339-8b0b-218b0334f706\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 21 18:16:57 crc kubenswrapper[5099]: I0121 18:16:57.314896 5099 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 21 18:16:57 crc kubenswrapper[5099]: I0121 18:16:57.625957 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xdblj" Jan 21 18:16:57 crc kubenswrapper[5099]: I0121 18:16:57.626052 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-xdblj" Jan 21 18:16:57 crc kubenswrapper[5099]: I0121 18:16:57.626639 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-6fsvr" Jan 21 18:16:57 crc kubenswrapper[5099]: I0121 18:16:57.626678 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6fsvr" Jan 21 18:16:57 crc kubenswrapper[5099]: I0121 18:16:57.656058 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-v9zdl" Jan 21 18:16:57 crc kubenswrapper[5099]: I0121 18:16:57.656112 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-v9zdl" Jan 21 18:16:58 crc kubenswrapper[5099]: I0121 18:16:58.032190 5099 generic.go:358] "Generic (PLEG): container finished" podID="4202775a-8750-4d76-ad90-6a5703048787" containerID="b0a6aad0aacb2963dd316a565b918980130bf5a8cd0ed2199e65cf64fa7511d9" exitCode=0 Jan 21 18:16:58 crc kubenswrapper[5099]: I0121 18:16:58.032254 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ncssr" event={"ID":"4202775a-8750-4d76-ad90-6a5703048787","Type":"ContainerDied","Data":"b0a6aad0aacb2963dd316a565b918980130bf5a8cd0ed2199e65cf64fa7511d9"} Jan 21 18:16:58 crc kubenswrapper[5099]: I0121 18:16:58.057423 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qsx2f" event={"ID":"28d3b79b-3ce4-427c-834d-9d4b2f9f0601","Type":"ContainerStarted","Data":"d8cc42bcafaa7f00591bae1d081f2dbed3296d3b8591476b8a45c329eb721f76"} Jan 21 18:16:58 crc kubenswrapper[5099]: I0121 18:16:58.121923 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x5qmz" podStartSLOduration=12.813875047 podStartE2EDuration="51.121890775s" podCreationTimestamp="2026-01-21 18:16:07 +0000 UTC" firstStartedPulling="2026-01-21 18:16:12.561926854 +0000 UTC m=+129.975889315" lastFinishedPulling="2026-01-21 18:16:50.869942582 +0000 UTC m=+168.283905043" observedRunningTime="2026-01-21 18:16:58.101513742 +0000 UTC m=+175.515476213" watchObservedRunningTime="2026-01-21 18:16:58.121890775 +0000 UTC m=+175.535853236" Jan 21 18:16:58 crc kubenswrapper[5099]: I0121 18:16:58.152430 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nnlkl" podStartSLOduration=12.006937111 podStartE2EDuration="49.15239895s" podCreationTimestamp="2026-01-21 18:16:09 +0000 UTC" firstStartedPulling="2026-01-21 18:16:13.763038485 +0000 UTC m=+131.177000936" lastFinishedPulling="2026-01-21 18:16:50.908500314 +0000 UTC m=+168.322462775" observedRunningTime="2026-01-21 18:16:58.148630746 +0000 UTC m=+175.562593227" watchObservedRunningTime="2026-01-21 18:16:58.15239895 +0000 UTC m=+175.566361411" Jan 21 18:16:58 crc kubenswrapper[5099]: I0121 18:16:58.382999 5099 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 21 18:16:58 crc kubenswrapper[5099]: I0121 18:16:58.391324 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d46bbfd2-40cb-40b4-b894-e9337f575676-kube-api-access\") pod \"d46bbfd2-40cb-40b4-b894-e9337f575676\" (UID: \"d46bbfd2-40cb-40b4-b894-e9337f575676\") " Jan 21 18:16:58 crc kubenswrapper[5099]: I0121 18:16:58.391587 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d46bbfd2-40cb-40b4-b894-e9337f575676-kubelet-dir\") pod \"d46bbfd2-40cb-40b4-b894-e9337f575676\" (UID: \"d46bbfd2-40cb-40b4-b894-e9337f575676\") " Jan 21 18:16:58 crc kubenswrapper[5099]: I0121 18:16:58.391815 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d46bbfd2-40cb-40b4-b894-e9337f575676-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d46bbfd2-40cb-40b4-b894-e9337f575676" (UID: "d46bbfd2-40cb-40b4-b894-e9337f575676"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 18:16:58 crc kubenswrapper[5099]: I0121 18:16:58.408041 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d46bbfd2-40cb-40b4-b894-e9337f575676-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d46bbfd2-40cb-40b4-b894-e9337f575676" (UID: "d46bbfd2-40cb-40b4-b894-e9337f575676"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:16:58 crc kubenswrapper[5099]: I0121 18:16:58.563319 5099 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d46bbfd2-40cb-40b4-b894-e9337f575676-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 18:16:58 crc kubenswrapper[5099]: I0121 18:16:58.563379 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d46bbfd2-40cb-40b4-b894-e9337f575676-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 18:16:58 crc kubenswrapper[5099]: I0121 18:16:58.631865 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 21 18:16:59 crc kubenswrapper[5099]: I0121 18:16:59.073948 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"d46bbfd2-40cb-40b4-b894-e9337f575676","Type":"ContainerDied","Data":"cd610bb6f640b25314de4fb0b5a72ddf8216e16af7ead7129150d0ff2b934ef2"} Jan 21 18:16:59 crc kubenswrapper[5099]: I0121 18:16:59.074331 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd610bb6f640b25314de4fb0b5a72ddf8216e16af7ead7129150d0ff2b934ef2" Jan 21 18:16:59 crc kubenswrapper[5099]: I0121 18:16:59.074447 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 21 18:16:59 crc kubenswrapper[5099]: I0121 18:16:59.090421 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"91b29a2c-7464-4339-8b0b-218b0334f706","Type":"ContainerStarted","Data":"b1943a8045be6db3eebc2d2a12b0771a49219a12808875941c3facf6247d2894"} Jan 21 18:16:59 crc kubenswrapper[5099]: I0121 18:16:59.176550 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qsx2f" podStartSLOduration=13.347635169 podStartE2EDuration="50.176524374s" podCreationTimestamp="2026-01-21 18:16:09 +0000 UTC" firstStartedPulling="2026-01-21 18:16:13.960369572 +0000 UTC m=+131.374332033" lastFinishedPulling="2026-01-21 18:16:50.789258777 +0000 UTC m=+168.203221238" observedRunningTime="2026-01-21 18:16:59.171964351 +0000 UTC m=+176.585926812" watchObservedRunningTime="2026-01-21 18:16:59.176524374 +0000 UTC m=+176.590486835" Jan 21 18:16:59 crc kubenswrapper[5099]: I0121 18:16:59.928037 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-v9zdl" podUID="b73a0c1c-91ce-4902-bbcf-cf68e52e0236" containerName="registry-server" probeResult="failure" output=< Jan 21 18:16:59 crc kubenswrapper[5099]: timeout: failed to connect service ":50051" within 1s Jan 21 18:16:59 crc kubenswrapper[5099]: > Jan 21 18:16:59 crc kubenswrapper[5099]: I0121 18:16:59.934010 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-6fsvr" podUID="450bc18d-8ddc-42eb-bc2b-0cd44d7198b8" containerName="registry-server" probeResult="failure" output=< Jan 21 18:16:59 crc kubenswrapper[5099]: timeout: failed to connect service ":50051" within 1s Jan 21 18:16:59 crc kubenswrapper[5099]: > Jan 21 18:17:00 crc kubenswrapper[5099]: I0121 18:17:00.097412 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ncssr" event={"ID":"4202775a-8750-4d76-ad90-6a5703048787","Type":"ContainerStarted","Data":"91728d268f5ae41d5139398062e2fdbf895695f6598bec52c5ee7420112ae38b"} Jan 21 18:17:00 crc kubenswrapper[5099]: I0121 18:17:00.119329 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ncssr" podStartSLOduration=16.712822317 podStartE2EDuration="50.119298347s" podCreationTimestamp="2026-01-21 18:16:10 +0000 UTC" firstStartedPulling="2026-01-21 18:16:17.373844867 +0000 UTC m=+134.787807328" lastFinishedPulling="2026-01-21 18:16:50.780320897 +0000 UTC m=+168.194283358" observedRunningTime="2026-01-21 18:17:00.118099017 +0000 UTC m=+177.532061478" watchObservedRunningTime="2026-01-21 18:17:00.119298347 +0000 UTC m=+177.533260808" Jan 21 18:17:00 crc kubenswrapper[5099]: I0121 18:17:00.235396 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-xdblj" podUID="a5ddd790-cf10-4dfc-a9ed-2ad08824bf51" containerName="registry-server" probeResult="failure" output=< Jan 21 18:17:00 crc kubenswrapper[5099]: timeout: failed to connect service ":50051" within 1s Jan 21 18:17:00 crc kubenswrapper[5099]: > Jan 21 18:17:00 crc kubenswrapper[5099]: I0121 18:17:00.284467 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6gg9l" Jan 21 18:17:00 crc kubenswrapper[5099]: I0121 18:17:00.284612 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" 
status="not ready" pod="openshift-marketplace/redhat-marketplace-6gg9l" Jan 21 18:17:00 crc kubenswrapper[5099]: I0121 18:17:00.297371 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nnlkl" Jan 21 18:17:00 crc kubenswrapper[5099]: I0121 18:17:00.297490 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-nnlkl" Jan 21 18:17:00 crc kubenswrapper[5099]: I0121 18:17:00.349632 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6gg9l" Jan 21 18:17:00 crc kubenswrapper[5099]: I0121 18:17:00.407468 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nnlkl" Jan 21 18:17:00 crc kubenswrapper[5099]: I0121 18:17:00.910311 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-ncssr" Jan 21 18:17:00 crc kubenswrapper[5099]: I0121 18:17:00.910483 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ncssr" Jan 21 18:17:01 crc kubenswrapper[5099]: I0121 18:17:01.071208 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-qsx2f" Jan 21 18:17:01 crc kubenswrapper[5099]: I0121 18:17:01.071473 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qsx2f" Jan 21 18:17:01 crc kubenswrapper[5099]: I0121 18:17:01.115521 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"91b29a2c-7464-4339-8b0b-218b0334f706","Type":"ContainerStarted","Data":"e35004d50902b2fa16da84ec1e12a9b14cc24a0a884fbc152739648598eefa86"} Jan 21 18:17:01 crc kubenswrapper[5099]: I0121 18:17:01.171194 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6gg9l" Jan 21 18:17:01 crc kubenswrapper[5099]: I0121 18:17:01.189700 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=5.189680755 podStartE2EDuration="5.189680755s" podCreationTimestamp="2026-01-21 18:16:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:17:01.147479322 +0000 UTC m=+178.561441783" watchObservedRunningTime="2026-01-21 18:17:01.189680755 +0000 UTC m=+178.603643216" Jan 21 18:17:01 crc kubenswrapper[5099]: I0121 18:17:01.205795 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nnlkl" Jan 21 18:17:01 crc kubenswrapper[5099]: I0121 18:17:01.531888 5099 patch_prober.go:28] interesting pod/downloads-747b44746d-zlpql container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.41:8080/\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Jan 21 18:17:01 crc kubenswrapper[5099]: I0121 18:17:01.531960 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-zlpql" podUID="bee88171-b2f0-49bb-92aa-8a0d79d87cb7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.41:8080/\": dial tcp 10.217.0.41:8080: connect: 
connection refused" Jan 21 18:17:01 crc kubenswrapper[5099]: I0121 18:17:01.963609 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ncssr" podUID="4202775a-8750-4d76-ad90-6a5703048787" containerName="registry-server" probeResult="failure" output=< Jan 21 18:17:01 crc kubenswrapper[5099]: timeout: failed to connect service ":50051" within 1s Jan 21 18:17:01 crc kubenswrapper[5099]: > Jan 21 18:17:02 crc kubenswrapper[5099]: I0121 18:17:02.140544 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qsx2f" podUID="28d3b79b-3ce4-427c-834d-9d4b2f9f0601" containerName="registry-server" probeResult="failure" output=< Jan 21 18:17:02 crc kubenswrapper[5099]: timeout: failed to connect service ":50051" within 1s Jan 21 18:17:02 crc kubenswrapper[5099]: > Jan 21 18:17:03 crc kubenswrapper[5099]: I0121 18:17:03.825423 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nnlkl"] Jan 21 18:17:03 crc kubenswrapper[5099]: I0121 18:17:03.827097 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nnlkl" podUID="ec86143c-2662-474d-857f-b54aee6207b0" containerName="registry-server" containerID="cri-o://ab2d4af6866471910be8ce0763e5b105f9c8b93930bd8a8311d5ccd5de844c26" gracePeriod=2 Jan 21 18:17:04 crc kubenswrapper[5099]: I0121 18:17:04.159199 5099 generic.go:358] "Generic (PLEG): container finished" podID="ec86143c-2662-474d-857f-b54aee6207b0" containerID="ab2d4af6866471910be8ce0763e5b105f9c8b93930bd8a8311d5ccd5de844c26" exitCode=0 Jan 21 18:17:04 crc kubenswrapper[5099]: I0121 18:17:04.159372 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nnlkl" event={"ID":"ec86143c-2662-474d-857f-b54aee6207b0","Type":"ContainerDied","Data":"ab2d4af6866471910be8ce0763e5b105f9c8b93930bd8a8311d5ccd5de844c26"} Jan 21 18:17:04 crc kubenswrapper[5099]: I0121 18:17:04.486413 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nnlkl" Jan 21 18:17:04 crc kubenswrapper[5099]: I0121 18:17:04.600078 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec86143c-2662-474d-857f-b54aee6207b0-utilities\") pod \"ec86143c-2662-474d-857f-b54aee6207b0\" (UID: \"ec86143c-2662-474d-857f-b54aee6207b0\") " Jan 21 18:17:04 crc kubenswrapper[5099]: I0121 18:17:04.600134 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzmlr\" (UniqueName: \"kubernetes.io/projected/ec86143c-2662-474d-857f-b54aee6207b0-kube-api-access-vzmlr\") pod \"ec86143c-2662-474d-857f-b54aee6207b0\" (UID: \"ec86143c-2662-474d-857f-b54aee6207b0\") " Jan 21 18:17:04 crc kubenswrapper[5099]: I0121 18:17:04.600200 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec86143c-2662-474d-857f-b54aee6207b0-catalog-content\") pod \"ec86143c-2662-474d-857f-b54aee6207b0\" (UID: \"ec86143c-2662-474d-857f-b54aee6207b0\") " Jan 21 18:17:04 crc kubenswrapper[5099]: I0121 18:17:04.601411 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec86143c-2662-474d-857f-b54aee6207b0-utilities" (OuterVolumeSpecName: "utilities") pod "ec86143c-2662-474d-857f-b54aee6207b0" (UID: "ec86143c-2662-474d-857f-b54aee6207b0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:17:04 crc kubenswrapper[5099]: I0121 18:17:04.613357 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec86143c-2662-474d-857f-b54aee6207b0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ec86143c-2662-474d-857f-b54aee6207b0" (UID: "ec86143c-2662-474d-857f-b54aee6207b0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:17:04 crc kubenswrapper[5099]: I0121 18:17:04.618414 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec86143c-2662-474d-857f-b54aee6207b0-kube-api-access-vzmlr" (OuterVolumeSpecName: "kube-api-access-vzmlr") pod "ec86143c-2662-474d-857f-b54aee6207b0" (UID: "ec86143c-2662-474d-857f-b54aee6207b0"). InnerVolumeSpecName "kube-api-access-vzmlr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:17:04 crc kubenswrapper[5099]: I0121 18:17:04.702245 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec86143c-2662-474d-857f-b54aee6207b0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 18:17:04 crc kubenswrapper[5099]: I0121 18:17:04.702285 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec86143c-2662-474d-857f-b54aee6207b0-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 18:17:04 crc kubenswrapper[5099]: I0121 18:17:04.702295 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vzmlr\" (UniqueName: \"kubernetes.io/projected/ec86143c-2662-474d-857f-b54aee6207b0-kube-api-access-vzmlr\") on node \"crc\" DevicePath \"\"" Jan 21 18:17:04 crc kubenswrapper[5099]: I0121 18:17:04.729126 5099 patch_prober.go:28] interesting pod/downloads-747b44746d-zlpql container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Jan 21 18:17:04 crc kubenswrapper[5099]: I0121 18:17:04.729188 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-zlpql" podUID="bee88171-b2f0-49bb-92aa-8a0d79d87cb7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.41:8080/\": dial tcp 10.217.0.41:8080: connect: connection refused" Jan 21 18:17:05 crc kubenswrapper[5099]: I0121 18:17:05.170480 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nnlkl" event={"ID":"ec86143c-2662-474d-857f-b54aee6207b0","Type":"ContainerDied","Data":"d28cdc45a3e754c99a313f8576a1fa9e7e65a75e2f0ec0f6b89d51030e597d19"} Jan 21 18:17:05 crc kubenswrapper[5099]: I0121 18:17:05.171120 5099 scope.go:117] "RemoveContainer" containerID="ab2d4af6866471910be8ce0763e5b105f9c8b93930bd8a8311d5ccd5de844c26" Jan 21 18:17:05 crc kubenswrapper[5099]: I0121 18:17:05.171380 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nnlkl" Jan 21 18:17:05 crc kubenswrapper[5099]: I0121 18:17:05.459102 5099 scope.go:117] "RemoveContainer" containerID="3e06708a1a9ba11fd3086c04396656520a0f1346f752e90c3e7008aa3bb39bfa" Jan 21 18:17:05 crc kubenswrapper[5099]: I0121 18:17:05.484369 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nnlkl"] Jan 21 18:17:05 crc kubenswrapper[5099]: I0121 18:17:05.487565 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nnlkl"] Jan 21 18:17:05 crc kubenswrapper[5099]: I0121 18:17:05.502644 5099 scope.go:117] "RemoveContainer" containerID="432bc00de22f2ddaa0286e5140666815be5e781c63991ef00ae26121c59d6c2d" Jan 21 18:17:05 crc kubenswrapper[5099]: I0121 18:17:05.923726 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec86143c-2662-474d-857f-b54aee6207b0" path="/var/lib/kubelet/pods/ec86143c-2662-474d-857f-b54aee6207b0/volumes" Jan 21 18:17:07 crc kubenswrapper[5099]: I0121 18:17:07.608358 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6fsvr" Jan 21 18:17:07 crc kubenswrapper[5099]: I0121 18:17:07.656686 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-x5qmz" Jan 21 18:17:07 crc kubenswrapper[5099]: I0121 18:17:07.656783 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-x5qmz" Jan 21 18:17:07 crc kubenswrapper[5099]: I0121 18:17:07.665578 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6fsvr" Jan 21 18:17:07 crc kubenswrapper[5099]: I0121 18:17:07.737334 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x5qmz" Jan 21 18:17:07 crc kubenswrapper[5099]: I0121 18:17:07.741717 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-v9zdl" Jan 21 18:17:07 crc kubenswrapper[5099]: I0121 18:17:07.746559 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xdblj" Jan 21 18:17:07 crc kubenswrapper[5099]: I0121 18:17:07.808195 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xdblj" Jan 21 18:17:07 crc kubenswrapper[5099]: I0121 18:17:07.867288 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-v9zdl" Jan 21 18:17:08 crc kubenswrapper[5099]: I0121 18:17:08.249834 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x5qmz" Jan 21 18:17:08 crc kubenswrapper[5099]: I0121 18:17:08.831254 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v9zdl"] Jan 21 18:17:09 crc kubenswrapper[5099]: I0121 18:17:09.206261 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-v9zdl" podUID="b73a0c1c-91ce-4902-bbcf-cf68e52e0236" containerName="registry-server" containerID="cri-o://78ae5115408c6dcccb3a4adb49a3bc3aed747695abc5fb33a2bd87b9ee88db5d" gracePeriod=2 Jan 21 18:17:09 crc kubenswrapper[5099]: I0121 18:17:09.420359 
5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x5qmz"] Jan 21 18:17:10 crc kubenswrapper[5099]: I0121 18:17:10.230279 5099 generic.go:358] "Generic (PLEG): container finished" podID="b73a0c1c-91ce-4902-bbcf-cf68e52e0236" containerID="78ae5115408c6dcccb3a4adb49a3bc3aed747695abc5fb33a2bd87b9ee88db5d" exitCode=0 Jan 21 18:17:10 crc kubenswrapper[5099]: I0121 18:17:10.230502 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v9zdl" event={"ID":"b73a0c1c-91ce-4902-bbcf-cf68e52e0236","Type":"ContainerDied","Data":"78ae5115408c6dcccb3a4adb49a3bc3aed747695abc5fb33a2bd87b9ee88db5d"} Jan 21 18:17:10 crc kubenswrapper[5099]: I0121 18:17:10.231222 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-x5qmz" podUID="eb54ba82-2a5f-46a9-8c2c-f59dc17d237d" containerName="registry-server" containerID="cri-o://7be0581f185328c6af4421036de97f76f662ec44ee2d376efb2cd0225cd73475" gracePeriod=2 Jan 21 18:17:10 crc kubenswrapper[5099]: I0121 18:17:10.395723 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v9zdl" Jan 21 18:17:10 crc kubenswrapper[5099]: I0121 18:17:10.508055 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b73a0c1c-91ce-4902-bbcf-cf68e52e0236-utilities\") pod \"b73a0c1c-91ce-4902-bbcf-cf68e52e0236\" (UID: \"b73a0c1c-91ce-4902-bbcf-cf68e52e0236\") " Jan 21 18:17:10 crc kubenswrapper[5099]: I0121 18:17:10.508307 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b73a0c1c-91ce-4902-bbcf-cf68e52e0236-catalog-content\") pod \"b73a0c1c-91ce-4902-bbcf-cf68e52e0236\" (UID: \"b73a0c1c-91ce-4902-bbcf-cf68e52e0236\") " Jan 21 18:17:10 crc kubenswrapper[5099]: I0121 18:17:10.508376 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggvht\" (UniqueName: \"kubernetes.io/projected/b73a0c1c-91ce-4902-bbcf-cf68e52e0236-kube-api-access-ggvht\") pod \"b73a0c1c-91ce-4902-bbcf-cf68e52e0236\" (UID: \"b73a0c1c-91ce-4902-bbcf-cf68e52e0236\") " Jan 21 18:17:10 crc kubenswrapper[5099]: I0121 18:17:10.509583 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b73a0c1c-91ce-4902-bbcf-cf68e52e0236-utilities" (OuterVolumeSpecName: "utilities") pod "b73a0c1c-91ce-4902-bbcf-cf68e52e0236" (UID: "b73a0c1c-91ce-4902-bbcf-cf68e52e0236"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:17:10 crc kubenswrapper[5099]: I0121 18:17:10.519230 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b73a0c1c-91ce-4902-bbcf-cf68e52e0236-kube-api-access-ggvht" (OuterVolumeSpecName: "kube-api-access-ggvht") pod "b73a0c1c-91ce-4902-bbcf-cf68e52e0236" (UID: "b73a0c1c-91ce-4902-bbcf-cf68e52e0236"). InnerVolumeSpecName "kube-api-access-ggvht". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:17:10 crc kubenswrapper[5099]: I0121 18:17:10.538243 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b73a0c1c-91ce-4902-bbcf-cf68e52e0236-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b73a0c1c-91ce-4902-bbcf-cf68e52e0236" (UID: "b73a0c1c-91ce-4902-bbcf-cf68e52e0236"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:17:10 crc kubenswrapper[5099]: I0121 18:17:10.609585 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b73a0c1c-91ce-4902-bbcf-cf68e52e0236-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 18:17:10 crc kubenswrapper[5099]: I0121 18:17:10.609937 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ggvht\" (UniqueName: \"kubernetes.io/projected/b73a0c1c-91ce-4902-bbcf-cf68e52e0236-kube-api-access-ggvht\") on node \"crc\" DevicePath \"\"" Jan 21 18:17:10 crc kubenswrapper[5099]: I0121 18:17:10.609966 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b73a0c1c-91ce-4902-bbcf-cf68e52e0236-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 18:17:10 crc kubenswrapper[5099]: I0121 18:17:10.963626 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ncssr" Jan 21 18:17:11 crc kubenswrapper[5099]: I0121 18:17:11.012912 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ncssr" Jan 21 18:17:11 crc kubenswrapper[5099]: I0121 18:17:11.141511 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qsx2f" Jan 21 18:17:11 crc kubenswrapper[5099]: I0121 18:17:11.196827 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qsx2f" Jan 21 18:17:11 crc kubenswrapper[5099]: I0121 18:17:11.247328 5099 generic.go:358] "Generic (PLEG): container finished" podID="eb54ba82-2a5f-46a9-8c2c-f59dc17d237d" containerID="7be0581f185328c6af4421036de97f76f662ec44ee2d376efb2cd0225cd73475" exitCode=0 Jan 21 18:17:11 crc kubenswrapper[5099]: I0121 18:17:11.247607 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x5qmz" event={"ID":"eb54ba82-2a5f-46a9-8c2c-f59dc17d237d","Type":"ContainerDied","Data":"7be0581f185328c6af4421036de97f76f662ec44ee2d376efb2cd0225cd73475"} Jan 21 18:17:11 crc kubenswrapper[5099]: I0121 18:17:11.247696 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x5qmz" event={"ID":"eb54ba82-2a5f-46a9-8c2c-f59dc17d237d","Type":"ContainerDied","Data":"24532a9ecbd66301416e84f2d6cc17a024e02c4073fe452f629063fff5d5f1aa"} Jan 21 18:17:11 crc kubenswrapper[5099]: I0121 18:17:11.247711 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24532a9ecbd66301416e84f2d6cc17a024e02c4073fe452f629063fff5d5f1aa" Jan 21 18:17:11 crc kubenswrapper[5099]: I0121 18:17:11.264515 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-v9zdl" Jan 21 18:17:11 crc kubenswrapper[5099]: I0121 18:17:11.264465 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v9zdl" event={"ID":"b73a0c1c-91ce-4902-bbcf-cf68e52e0236","Type":"ContainerDied","Data":"f29c337e5870b2b15dce9eb6a197b542fb032e07b9a6d9512e7f82f28df4c245"} Jan 21 18:17:11 crc kubenswrapper[5099]: I0121 18:17:11.264652 5099 scope.go:117] "RemoveContainer" containerID="78ae5115408c6dcccb3a4adb49a3bc3aed747695abc5fb33a2bd87b9ee88db5d" Jan 21 18:17:11 crc kubenswrapper[5099]: I0121 18:17:11.270878 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x5qmz" Jan 21 18:17:11 crc kubenswrapper[5099]: I0121 18:17:11.300928 5099 scope.go:117] "RemoveContainer" containerID="b6e1d37071ba7f36cdda930b077cd2704f28df04c275221a52e8739d1cab337f" Jan 21 18:17:11 crc kubenswrapper[5099]: I0121 18:17:11.321286 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v9zdl"] Jan 21 18:17:11 crc kubenswrapper[5099]: I0121 18:17:11.325267 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-v9zdl"] Jan 21 18:17:11 crc kubenswrapper[5099]: I0121 18:17:11.328402 5099 scope.go:117] "RemoveContainer" containerID="1477e8efa11a8826f33dc15e74ed84d13b88e34d2abed6ecafe308ce2db75e91" Jan 21 18:17:11 crc kubenswrapper[5099]: I0121 18:17:11.426615 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzdbt\" (UniqueName: \"kubernetes.io/projected/eb54ba82-2a5f-46a9-8c2c-f59dc17d237d-kube-api-access-gzdbt\") pod \"eb54ba82-2a5f-46a9-8c2c-f59dc17d237d\" (UID: \"eb54ba82-2a5f-46a9-8c2c-f59dc17d237d\") " Jan 21 18:17:11 crc kubenswrapper[5099]: I0121 18:17:11.426888 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb54ba82-2a5f-46a9-8c2c-f59dc17d237d-utilities\") pod \"eb54ba82-2a5f-46a9-8c2c-f59dc17d237d\" (UID: \"eb54ba82-2a5f-46a9-8c2c-f59dc17d237d\") " Jan 21 18:17:11 crc kubenswrapper[5099]: I0121 18:17:11.427044 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb54ba82-2a5f-46a9-8c2c-f59dc17d237d-catalog-content\") pod \"eb54ba82-2a5f-46a9-8c2c-f59dc17d237d\" (UID: \"eb54ba82-2a5f-46a9-8c2c-f59dc17d237d\") " Jan 21 18:17:11 crc kubenswrapper[5099]: I0121 18:17:11.427991 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb54ba82-2a5f-46a9-8c2c-f59dc17d237d-utilities" (OuterVolumeSpecName: "utilities") pod "eb54ba82-2a5f-46a9-8c2c-f59dc17d237d" (UID: "eb54ba82-2a5f-46a9-8c2c-f59dc17d237d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:17:11 crc kubenswrapper[5099]: I0121 18:17:11.433839 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb54ba82-2a5f-46a9-8c2c-f59dc17d237d-kube-api-access-gzdbt" (OuterVolumeSpecName: "kube-api-access-gzdbt") pod "eb54ba82-2a5f-46a9-8c2c-f59dc17d237d" (UID: "eb54ba82-2a5f-46a9-8c2c-f59dc17d237d"). InnerVolumeSpecName "kube-api-access-gzdbt". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:17:11 crc kubenswrapper[5099]: I0121 18:17:11.479443 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb54ba82-2a5f-46a9-8c2c-f59dc17d237d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eb54ba82-2a5f-46a9-8c2c-f59dc17d237d" (UID: "eb54ba82-2a5f-46a9-8c2c-f59dc17d237d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:17:11 crc kubenswrapper[5099]: I0121 18:17:11.529142 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb54ba82-2a5f-46a9-8c2c-f59dc17d237d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 18:17:11 crc kubenswrapper[5099]: I0121 18:17:11.529193 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb54ba82-2a5f-46a9-8c2c-f59dc17d237d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 18:17:11 crc kubenswrapper[5099]: I0121 18:17:11.529207 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gzdbt\" (UniqueName: \"kubernetes.io/projected/eb54ba82-2a5f-46a9-8c2c-f59dc17d237d-kube-api-access-gzdbt\") on node \"crc\" DevicePath \"\"" Jan 21 18:17:11 crc kubenswrapper[5099]: I0121 18:17:11.921281 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b73a0c1c-91ce-4902-bbcf-cf68e52e0236" path="/var/lib/kubelet/pods/b73a0c1c-91ce-4902-bbcf-cf68e52e0236/volumes" Jan 21 18:17:12 crc kubenswrapper[5099]: I0121 18:17:12.276885 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x5qmz" Jan 21 18:17:12 crc kubenswrapper[5099]: I0121 18:17:12.297879 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x5qmz"] Jan 21 18:17:12 crc kubenswrapper[5099]: I0121 18:17:12.300662 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-x5qmz"] Jan 21 18:17:13 crc kubenswrapper[5099]: I0121 18:17:13.818886 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ncssr"] Jan 21 18:17:13 crc kubenswrapper[5099]: I0121 18:17:13.819440 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ncssr" podUID="4202775a-8750-4d76-ad90-6a5703048787" containerName="registry-server" containerID="cri-o://91728d268f5ae41d5139398062e2fdbf895695f6598bec52c5ee7420112ae38b" gracePeriod=2 Jan 21 18:17:13 crc kubenswrapper[5099]: I0121 18:17:13.921856 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb54ba82-2a5f-46a9-8c2c-f59dc17d237d" path="/var/lib/kubelet/pods/eb54ba82-2a5f-46a9-8c2c-f59dc17d237d/volumes" Jan 21 18:17:14 crc kubenswrapper[5099]: I0121 18:17:14.730792 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:17:14 crc kubenswrapper[5099]: I0121 18:17:14.743592 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-747b44746d-zlpql" Jan 21 18:17:15 crc kubenswrapper[5099]: I0121 18:17:15.299632 5099 generic.go:358] "Generic (PLEG): container finished" podID="4202775a-8750-4d76-ad90-6a5703048787" containerID="91728d268f5ae41d5139398062e2fdbf895695f6598bec52c5ee7420112ae38b" exitCode=0 Jan 21 18:17:15 
crc kubenswrapper[5099]: I0121 18:17:15.299671 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ncssr" event={"ID":"4202775a-8750-4d76-ad90-6a5703048787","Type":"ContainerDied","Data":"91728d268f5ae41d5139398062e2fdbf895695f6598bec52c5ee7420112ae38b"} Jan 21 18:17:15 crc kubenswrapper[5099]: I0121 18:17:15.586780 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ncssr" Jan 21 18:17:15 crc kubenswrapper[5099]: I0121 18:17:15.699264 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plm55\" (UniqueName: \"kubernetes.io/projected/4202775a-8750-4d76-ad90-6a5703048787-kube-api-access-plm55\") pod \"4202775a-8750-4d76-ad90-6a5703048787\" (UID: \"4202775a-8750-4d76-ad90-6a5703048787\") " Jan 21 18:17:15 crc kubenswrapper[5099]: I0121 18:17:15.699400 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4202775a-8750-4d76-ad90-6a5703048787-catalog-content\") pod \"4202775a-8750-4d76-ad90-6a5703048787\" (UID: \"4202775a-8750-4d76-ad90-6a5703048787\") " Jan 21 18:17:15 crc kubenswrapper[5099]: I0121 18:17:15.699490 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4202775a-8750-4d76-ad90-6a5703048787-utilities\") pod \"4202775a-8750-4d76-ad90-6a5703048787\" (UID: \"4202775a-8750-4d76-ad90-6a5703048787\") " Jan 21 18:17:15 crc kubenswrapper[5099]: I0121 18:17:15.700723 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4202775a-8750-4d76-ad90-6a5703048787-utilities" (OuterVolumeSpecName: "utilities") pod "4202775a-8750-4d76-ad90-6a5703048787" (UID: "4202775a-8750-4d76-ad90-6a5703048787"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:17:15 crc kubenswrapper[5099]: I0121 18:17:15.707279 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4202775a-8750-4d76-ad90-6a5703048787-kube-api-access-plm55" (OuterVolumeSpecName: "kube-api-access-plm55") pod "4202775a-8750-4d76-ad90-6a5703048787" (UID: "4202775a-8750-4d76-ad90-6a5703048787"). InnerVolumeSpecName "kube-api-access-plm55". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:17:15 crc kubenswrapper[5099]: I0121 18:17:15.800803 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-plm55\" (UniqueName: \"kubernetes.io/projected/4202775a-8750-4d76-ad90-6a5703048787-kube-api-access-plm55\") on node \"crc\" DevicePath \"\"" Jan 21 18:17:15 crc kubenswrapper[5099]: I0121 18:17:15.801168 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4202775a-8750-4d76-ad90-6a5703048787-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 18:17:16 crc kubenswrapper[5099]: I0121 18:17:16.310684 5099 util.go:48] "No ready sandbox for pod can be found. 
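The three repeating messages above, "operationExecutor.UnmountVolume started", "UnmountVolume.TearDown succeeded", and "Volume detached", trace the kubelet's volume reconciler tearing down each volume of a deleted pod before marking it detached in actual state. A minimal illustrative sketch of that ordering; the types and function below are invented for illustration and are not kubelet source:

package main

import "fmt"

// mountedVolume is a stand-in for one remaining mount of a terminated pod.
type mountedVolume struct {
	Name   string // e.g. "catalog-content"
	Plugin string // e.g. "kubernetes.io/empty-dir"
	PodUID string
}

// unmountTerminatedPod handles each volume in order: start the unmount, tear
// the mount point down, and only then record the volume as detached, which is
// why each volume produces the three log lines in that sequence above.
func unmountTerminatedPod(vols []mountedVolume) {
	for _, v := range vols {
		fmt.Printf("operationExecutor.UnmountVolume started for volume %q pod %q\n", v.Name, v.PodUID)
		// ... actual unmount of the mount point would happen here ...
		fmt.Printf("UnmountVolume.TearDown succeeded for volume %q, PluginName %q\n", v.Name, v.Plugin)
		fmt.Printf("Volume detached for volume %q on node %q\n", v.Name, "crc")
	}
}

func main() {
	unmountTerminatedPod([]mountedVolume{
		{Name: "utilities", Plugin: "kubernetes.io/empty-dir", PodUID: "4202775a-8750-4d76-ad90-6a5703048787"},
		{Name: "kube-api-access-plm55", Plugin: "kubernetes.io/projected", PodUID: "4202775a-8750-4d76-ad90-6a5703048787"},
	})
}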
Need to start a new one" pod="openshift-marketplace/redhat-operators-ncssr" Jan 21 18:17:16 crc kubenswrapper[5099]: I0121 18:17:16.310682 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ncssr" event={"ID":"4202775a-8750-4d76-ad90-6a5703048787","Type":"ContainerDied","Data":"b60882b21035216a52804c70b1ad5ba9115012220fb1e376e9b6aaad14797df8"} Jan 21 18:17:16 crc kubenswrapper[5099]: I0121 18:17:16.310847 5099 scope.go:117] "RemoveContainer" containerID="91728d268f5ae41d5139398062e2fdbf895695f6598bec52c5ee7420112ae38b" Jan 21 18:17:16 crc kubenswrapper[5099]: I0121 18:17:16.333150 5099 scope.go:117] "RemoveContainer" containerID="b0a6aad0aacb2963dd316a565b918980130bf5a8cd0ed2199e65cf64fa7511d9" Jan 21 18:17:16 crc kubenswrapper[5099]: I0121 18:17:16.342649 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4202775a-8750-4d76-ad90-6a5703048787-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4202775a-8750-4d76-ad90-6a5703048787" (UID: "4202775a-8750-4d76-ad90-6a5703048787"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:17:16 crc kubenswrapper[5099]: I0121 18:17:16.351984 5099 scope.go:117] "RemoveContainer" containerID="c590809ce905abe3a59ed4b7c7f7aff98fc18ba35b34b7a0f0f584f3257b4fcd" Jan 21 18:17:16 crc kubenswrapper[5099]: I0121 18:17:16.408642 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4202775a-8750-4d76-ad90-6a5703048787-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 18:17:16 crc kubenswrapper[5099]: I0121 18:17:16.645533 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ncssr"] Jan 21 18:17:16 crc kubenswrapper[5099]: I0121 18:17:16.650463 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ncssr"] Jan 21 18:17:17 crc kubenswrapper[5099]: I0121 18:17:17.920160 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4202775a-8750-4d76-ad90-6a5703048787" path="/var/lib/kubelet/pods/4202775a-8750-4d76-ad90-6a5703048787/volumes" Jan 21 18:17:25 crc kubenswrapper[5099]: I0121 18:17:25.758376 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 18:17:25 crc kubenswrapper[5099]: I0121 18:17:25.904099 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-6qnjf"] Jan 21 18:17:31 crc kubenswrapper[5099]: I0121 18:17:31.391191 5099 ???:1] "http: TLS handshake error from 192.168.126.11:40570: no serving certificate available for the kubelet" Jan 21 18:17:38 crc kubenswrapper[5099]: I0121 18:17:38.635485 5099 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 18:17:38 crc kubenswrapper[5099]: I0121 18:17:38.636712 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="eb54ba82-2a5f-46a9-8c2c-f59dc17d237d" containerName="extract-utilities" Jan 21 18:17:38 crc kubenswrapper[5099]: I0121 18:17:38.636747 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb54ba82-2a5f-46a9-8c2c-f59dc17d237d" containerName="extract-utilities" Jan 21 18:17:38 crc kubenswrapper[5099]: I0121 18:17:38.636761 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="eb54ba82-2a5f-46a9-8c2c-f59dc17d237d" containerName="extract-content" Jan 21 18:17:38 crc kubenswrapper[5099]: I0121 18:17:38.636766 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb54ba82-2a5f-46a9-8c2c-f59dc17d237d" containerName="extract-content" Jan 21 18:17:38 crc kubenswrapper[5099]: I0121 18:17:38.636781 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b73a0c1c-91ce-4902-bbcf-cf68e52e0236" containerName="extract-utilities" Jan 21 18:17:38 crc kubenswrapper[5099]: I0121 18:17:38.636786 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="b73a0c1c-91ce-4902-bbcf-cf68e52e0236" containerName="extract-utilities" Jan 21 18:17:38 crc kubenswrapper[5099]: I0121 18:17:38.636799 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b73a0c1c-91ce-4902-bbcf-cf68e52e0236" containerName="registry-server" Jan 21 18:17:38 crc kubenswrapper[5099]: I0121 18:17:38.636805 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="b73a0c1c-91ce-4902-bbcf-cf68e52e0236" containerName="registry-server" Jan 21 18:17:38 crc kubenswrapper[5099]: I0121 18:17:38.636810 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="eb54ba82-2a5f-46a9-8c2c-f59dc17d237d" containerName="registry-server" Jan 21 18:17:38 crc kubenswrapper[5099]: I0121 18:17:38.636815 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb54ba82-2a5f-46a9-8c2c-f59dc17d237d" containerName="registry-server" Jan 21 18:17:38 crc kubenswrapper[5099]: I0121 18:17:38.636825 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ec86143c-2662-474d-857f-b54aee6207b0" containerName="extract-content" Jan 21 18:17:38 crc kubenswrapper[5099]: I0121 18:17:38.636830 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec86143c-2662-474d-857f-b54aee6207b0" containerName="extract-content" Jan 21 18:17:38 crc kubenswrapper[5099]: I0121 18:17:38.636839 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4202775a-8750-4d76-ad90-6a5703048787" containerName="extract-content" Jan 21 18:17:38 crc kubenswrapper[5099]: I0121 18:17:38.636844 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="4202775a-8750-4d76-ad90-6a5703048787" containerName="extract-content" Jan 21 18:17:38 crc kubenswrapper[5099]: I0121 18:17:38.636852 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4202775a-8750-4d76-ad90-6a5703048787" containerName="extract-utilities" Jan 21 18:17:38 crc kubenswrapper[5099]: I0121 18:17:38.636857 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="4202775a-8750-4d76-ad90-6a5703048787" containerName="extract-utilities" Jan 21 18:17:38 crc kubenswrapper[5099]: I0121 18:17:38.636865 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ec86143c-2662-474d-857f-b54aee6207b0" containerName="registry-server" Jan 21 18:17:38 crc kubenswrapper[5099]: I0121 18:17:38.636870 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec86143c-2662-474d-857f-b54aee6207b0" containerName="registry-server" Jan 21 18:17:38 crc kubenswrapper[5099]: I0121 18:17:38.636878 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d46bbfd2-40cb-40b4-b894-e9337f575676" containerName="pruner" Jan 21 18:17:38 crc kubenswrapper[5099]: I0121 18:17:38.636884 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="d46bbfd2-40cb-40b4-b894-e9337f575676" containerName="pruner" Jan 21 
18:17:38 crc kubenswrapper[5099]: I0121 18:17:38.636896 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ec86143c-2662-474d-857f-b54aee6207b0" containerName="extract-utilities" Jan 21 18:17:38 crc kubenswrapper[5099]: I0121 18:17:38.636902 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec86143c-2662-474d-857f-b54aee6207b0" containerName="extract-utilities" Jan 21 18:17:38 crc kubenswrapper[5099]: I0121 18:17:38.636914 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b73a0c1c-91ce-4902-bbcf-cf68e52e0236" containerName="extract-content" Jan 21 18:17:38 crc kubenswrapper[5099]: I0121 18:17:38.636919 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="b73a0c1c-91ce-4902-bbcf-cf68e52e0236" containerName="extract-content" Jan 21 18:17:38 crc kubenswrapper[5099]: I0121 18:17:38.636928 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4202775a-8750-4d76-ad90-6a5703048787" containerName="registry-server" Jan 21 18:17:38 crc kubenswrapper[5099]: I0121 18:17:38.636936 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="4202775a-8750-4d76-ad90-6a5703048787" containerName="registry-server" Jan 21 18:17:38 crc kubenswrapper[5099]: I0121 18:17:38.637052 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="eb54ba82-2a5f-46a9-8c2c-f59dc17d237d" containerName="registry-server" Jan 21 18:17:38 crc kubenswrapper[5099]: I0121 18:17:38.637072 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="b73a0c1c-91ce-4902-bbcf-cf68e52e0236" containerName="registry-server" Jan 21 18:17:38 crc kubenswrapper[5099]: I0121 18:17:38.637078 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="ec86143c-2662-474d-857f-b54aee6207b0" containerName="registry-server" Jan 21 18:17:38 crc kubenswrapper[5099]: I0121 18:17:38.637086 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="4202775a-8750-4d76-ad90-6a5703048787" containerName="registry-server" Jan 21 18:17:38 crc kubenswrapper[5099]: I0121 18:17:38.637094 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="d46bbfd2-40cb-40b4-b894-e9337f575676" containerName="pruner" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.213237 5099 util.go:30] "No sandbox for pod can be found. 
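The paired "RemoveStaleState: containerMap: removing container" / "Deleted CPUSet assignment" lines above (and the memory_manager equivalents) fire when a new pod is admitted: the CPU and memory managers drop state entries for containers whose pods no longer exist. A minimal sketch of that cleanup pattern with invented types; this is not the kubelet's actual implementation:

package main

import "fmt"

// key identifies one container's resource-manager state entry.
type key struct{ podUID, container string }

// assignments maps containers to their pinned CPUs (a string in this sketch).
type assignments map[key]string

// removeStaleState drops every entry whose pod is no longer active, producing
// one "removing container" line per deleted entry, as in the log above.
// Deleting the current key while ranging over a Go map is well-defined.
func (a assignments) removeStaleState(activePods map[string]bool) {
	for k := range a {
		if !activePods[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", k.podUID, k.container)
			delete(a, k)
		}
	}
}

func main() {
	a := assignments{
		{podUID: "eb54ba82-2a5f-46a9-8c2c-f59dc17d237d", container: "registry-server"}: "0-3",
		{podUID: "still-running", container: "app"}:                                    "4-7",
	}
	a.removeStaleState(map[string]bool{"still-running": true})
	fmt.Println(len(a), "assignment(s) remain") // 1
}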
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.222412 5099 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.222475 5099 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.223256 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.223281 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.223293 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.223301 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.223314 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.223323 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.223341 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.223350 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.223362 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.223369 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.223377 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.223384 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.223414 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.223425 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.223433 5099 cpu_manager.go:401] "RemoveStaleState: 
containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.223440 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.223569 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.223579 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.223591 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.223601 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.223608 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.223615 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.223623 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.223630 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.223638 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.224437 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" containerID="cri-o://69214b9af392a2ff7edbb9c7275209780398e51d77d93f6fc12bdb6df10c0018" gracePeriod=15 Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.225027 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.225054 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.225379 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.225390 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.226064 5099 kuberuntime_container.go:858] "Killing container with a grace 
period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://b52ee1ab5f30ea08acd0bcb057da0e09c87c2f8116e08d2d2d7bd32030c95907" gracePeriod=15 Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.226142 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://a571febb49a23145b3d2cad3e4cc539fe35f0d8fdd7c4876c9868a1d1889ac58" gracePeriod=15 Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.226155 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://77fff9816418e4672b53ddb5f6696e449760e9cf1da327c01939f0df0bb333f7" gracePeriod=15 Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.226064 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" containerID="cri-o://335d170d3d75b85cf52d2793bf8e5cdee4c8a7903ec3159e8f7a82cd0db5914e" gracePeriod=15 Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.301159 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.301973 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.302019 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.302066 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.302091 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.302120 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.302197 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.302270 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.302482 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.302569 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.305812 5099 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 18:17:40 crc kubenswrapper[5099]: E0121 18:17:40.306752 5099 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.61:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.404560 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.404626 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.404654 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.404680 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.404697 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.404695 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.404795 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.404795 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.404827 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.404841 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.404859 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.405376 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 18:17:40 crc 
kubenswrapper[5099]: I0121 18:17:40.404882 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.404937 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.405316 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.405322 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.405446 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.405452 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.404916 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.405467 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.465004 5099 generic.go:358] "Generic (PLEG): container finished" podID="91b29a2c-7464-4339-8b0b-218b0334f706" containerID="e35004d50902b2fa16da84ec1e12a9b14cc24a0a884fbc152739648598eefa86" exitCode=0 Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.465240 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"91b29a2c-7464-4339-8b0b-218b0334f706","Type":"ContainerDied","Data":"e35004d50902b2fa16da84ec1e12a9b14cc24a0a884fbc152739648598eefa86"} Jan 21 18:17:40 
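The "Creating a mirror pod for static pod" / "Failed creating a mirror pod" pair above shows why this restart is resilient: static pods come from on-disk manifests (source="file") and run regardless of the API server, while the mirror pod is only a best-effort POST that fails with "connection refused" until the new kube-apiserver is up, and is retried on a later sync. An illustrative sketch under those assumptions; the client interface below is invented:

package main

import (
	"errors"
	"fmt"
)

// podCreator is an invented stand-in for the kubelet's API client.
type podCreator interface {
	CreateMirrorPod(namespace, name string) error
}

// downAPIServer simulates the window in which the API server is unreachable.
type downAPIServer struct{}

func (downAPIServer) CreateMirrorPod(ns, name string) error {
	return errors.New("dial tcp 38.129.56.61:6443: connect: connection refused")
}

// syncStaticPod tolerates a failed mirror-pod POST: the static pod keeps
// running from its manifest, and the POST is retried on a later sync.
func syncStaticPod(c podCreator, ns, name string) {
	fmt.Printf("Creating a mirror pod for static pod %s/%s\n", ns, name)
	if err := c.CreateMirrorPod(ns, name); err != nil {
		fmt.Printf("Failed creating a mirror pod: %v (static pod unaffected; will retry)\n", err)
	}
}

func main() {
	syncStaticPod(downAPIServer{}, "openshift-kube-apiserver", "kube-apiserver-startup-monitor-crc")
}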
Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.466166 5099 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.61:6443: connect: connection refused"
Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.466446 5099 status_manager.go:895] "Failed to get status for pod" podUID="91b29a2c-7464-4339-8b0b-218b0334f706" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.129.56.61:6443: connect: connection refused"
Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.468232 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log"
Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.469489 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log"
Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.470022 5099 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="335d170d3d75b85cf52d2793bf8e5cdee4c8a7903ec3159e8f7a82cd0db5914e" exitCode=0
Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.470041 5099 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="a571febb49a23145b3d2cad3e4cc539fe35f0d8fdd7c4876c9868a1d1889ac58" exitCode=0
Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.470047 5099 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="b52ee1ab5f30ea08acd0bcb057da0e09c87c2f8116e08d2d2d7bd32030c95907" exitCode=0
Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.470054 5099 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="77fff9816418e4672b53ddb5f6696e449760e9cf1da327c01939f0df0bb333f7" exitCode=2
Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.470116 5099 scope.go:117] "RemoveContainer" containerID="1dec0ab2e83799703e0d0ba5f91da97aaa7f611990710cb8ee7aa49e64a46994"
Jan 21 18:17:40 crc kubenswrapper[5099]: I0121 18:17:40.607961 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 21 18:17:40 crc kubenswrapper[5099]: E0121 18:17:40.646958 5099 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.61:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188cd1d3a88e4a4e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:17:40.645964366 +0000 UTC m=+218.059926817,LastTimestamp:2026-01-21 18:17:40.645964366 +0000 UTC m=+218.059926817,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 18:17:41 crc kubenswrapper[5099]: I0121 18:17:41.053787 5099 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 21 18:17:41 crc kubenswrapper[5099]: I0121 18:17:41.054263 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 21 18:17:41 crc kubenswrapper[5099]: I0121 18:17:41.479655 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log"
Jan 21 18:17:41 crc kubenswrapper[5099]: I0121 18:17:41.482296 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"835a0e0e5425d8ab724a9d09077873178b8899a3a0b7a3fe02ffdae32f7e6bd8"}
Jan 21 18:17:41 crc kubenswrapper[5099]: I0121 18:17:41.482350 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"d24f410e0ab4fd995e3ea40de7fabf3fcf3dbb2a3526a2f6a0cb945ba185b2ca"}
Jan 21 18:17:41 crc kubenswrapper[5099]: I0121 18:17:41.482821 5099 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 21 18:17:41 crc kubenswrapper[5099]: E0121 18:17:41.483441 5099 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.61:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 21 18:17:41 crc kubenswrapper[5099]: I0121 18:17:41.483603 5099 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.61:6443: connect: connection refused"
Jan 21 18:17:41 crc kubenswrapper[5099]: I0121 18:17:41.484224 5099 status_manager.go:895] "Failed to get status for pod" podUID="91b29a2c-7464-4339-8b0b-218b0334f706" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.129.56.61:6443: connect: connection refused"
Jan 21 18:17:41 crc kubenswrapper[5099]: I0121 18:17:41.796386 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc"
Jan 21 18:17:41 crc kubenswrapper[5099]: I0121 18:17:41.797579 5099 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.61:6443: connect: connection refused"
Jan 21 18:17:41 crc kubenswrapper[5099]: I0121 18:17:41.798316 5099 status_manager.go:895] "Failed to get status for pod" podUID="91b29a2c-7464-4339-8b0b-218b0334f706" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.129.56.61:6443: connect: connection refused"
Jan 21 18:17:41 crc kubenswrapper[5099]: I0121 18:17:41.931416 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/91b29a2c-7464-4339-8b0b-218b0334f706-kubelet-dir\") pod \"91b29a2c-7464-4339-8b0b-218b0334f706\" (UID: \"91b29a2c-7464-4339-8b0b-218b0334f706\") "
Jan 21 18:17:41 crc kubenswrapper[5099]: I0121 18:17:41.931538 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91b29a2c-7464-4339-8b0b-218b0334f706-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "91b29a2c-7464-4339-8b0b-218b0334f706" (UID: "91b29a2c-7464-4339-8b0b-218b0334f706"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 18:17:41 crc kubenswrapper[5099]: I0121 18:17:41.932267 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/91b29a2c-7464-4339-8b0b-218b0334f706-var-lock\") pod \"91b29a2c-7464-4339-8b0b-218b0334f706\" (UID: \"91b29a2c-7464-4339-8b0b-218b0334f706\") "
Jan 21 18:17:41 crc kubenswrapper[5099]: I0121 18:17:41.932530 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/91b29a2c-7464-4339-8b0b-218b0334f706-kube-api-access\") pod \"91b29a2c-7464-4339-8b0b-218b0334f706\" (UID: \"91b29a2c-7464-4339-8b0b-218b0334f706\") "
Jan 21 18:17:41 crc kubenswrapper[5099]: I0121 18:17:41.932381 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91b29a2c-7464-4339-8b0b-218b0334f706-var-lock" (OuterVolumeSpecName: "var-lock") pod "91b29a2c-7464-4339-8b0b-218b0334f706" (UID: "91b29a2c-7464-4339-8b0b-218b0334f706"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 18:17:41 crc kubenswrapper[5099]: I0121 18:17:41.933041 5099 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/91b29a2c-7464-4339-8b0b-218b0334f706-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 21 18:17:41 crc kubenswrapper[5099]: I0121 18:17:41.933133 5099 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/91b29a2c-7464-4339-8b0b-218b0334f706-var-lock\") on node \"crc\" DevicePath \"\""
Jan 21 18:17:41 crc kubenswrapper[5099]: I0121 18:17:41.938797 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91b29a2c-7464-4339-8b0b-218b0334f706-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "91b29a2c-7464-4339-8b0b-218b0334f706" (UID: "91b29a2c-7464-4339-8b0b-218b0334f706"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 18:17:42 crc kubenswrapper[5099]: I0121 18:17:42.035287 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/91b29a2c-7464-4339-8b0b-218b0334f706-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 21 18:17:42 crc kubenswrapper[5099]: I0121 18:17:42.505678 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"91b29a2c-7464-4339-8b0b-218b0334f706","Type":"ContainerDied","Data":"b1943a8045be6db3eebc2d2a12b0771a49219a12808875941c3facf6247d2894"}
Jan 21 18:17:42 crc kubenswrapper[5099]: I0121 18:17:42.505760 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1943a8045be6db3eebc2d2a12b0771a49219a12808875941c3facf6247d2894"
Jan 21 18:17:42 crc kubenswrapper[5099]: I0121 18:17:42.505906 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc"
Jan 21 18:17:42 crc kubenswrapper[5099]: I0121 18:17:42.522278 5099 status_manager.go:895] "Failed to get status for pod" podUID="91b29a2c-7464-4339-8b0b-218b0334f706" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.129.56.61:6443: connect: connection refused"
Jan 21 18:17:42 crc kubenswrapper[5099]: I0121 18:17:42.760915 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log"
Jan 21 18:17:42 crc kubenswrapper[5099]: I0121 18:17:42.762703 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 18:17:42 crc kubenswrapper[5099]: I0121 18:17:42.763724 5099 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.61:6443: connect: connection refused"
Jan 21 18:17:42 crc kubenswrapper[5099]: I0121 18:17:42.764377 5099 status_manager.go:895] "Failed to get status for pod" podUID="91b29a2c-7464-4339-8b0b-218b0334f706" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.129.56.61:6443: connect: connection refused"
Jan 21 18:17:42 crc kubenswrapper[5099]: I0121 18:17:42.849720 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") "
Jan 21 18:17:42 crc kubenswrapper[5099]: I0121 18:17:42.849840 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") "
Jan 21 18:17:42 crc kubenswrapper[5099]: I0121 18:17:42.849940 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 18:17:42 crc kubenswrapper[5099]: I0121 18:17:42.849973 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 18:17:42 crc kubenswrapper[5099]: I0121 18:17:42.850095 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") "
Jan 21 18:17:42 crc kubenswrapper[5099]: I0121 18:17:42.850123 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") "
Jan 21 18:17:42 crc kubenswrapper[5099]: I0121 18:17:42.850253 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 18:17:42 crc kubenswrapper[5099]: I0121 18:17:42.850222 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") "
Jan 21 18:17:42 crc kubenswrapper[5099]: I0121 18:17:42.851213 5099 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 21 18:17:42 crc kubenswrapper[5099]: I0121 18:17:42.851243 5099 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\""
Jan 21 18:17:42 crc kubenswrapper[5099]: I0121 18:17:42.851236 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 18:17:42 crc kubenswrapper[5099]: I0121 18:17:42.851252 5099 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\""
Jan 21 18:17:42 crc kubenswrapper[5099]: I0121 18:17:42.854333 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 18:17:42 crc kubenswrapper[5099]: I0121 18:17:42.952920 5099 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\""
Jan 21 18:17:42 crc kubenswrapper[5099]: I0121 18:17:42.952964 5099 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\""
Jan 21 18:17:43 crc kubenswrapper[5099]: I0121 18:17:43.517101 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log"
Jan 21 18:17:43 crc kubenswrapper[5099]: I0121 18:17:43.520349 5099 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="69214b9af392a2ff7edbb9c7275209780398e51d77d93f6fc12bdb6df10c0018" exitCode=0
Jan 21 18:17:43 crc kubenswrapper[5099]: I0121 18:17:43.520417 5099 scope.go:117] "RemoveContainer" containerID="335d170d3d75b85cf52d2793bf8e5cdee4c8a7903ec3159e8f7a82cd0db5914e"
Jan 21 18:17:43 crc kubenswrapper[5099]: I0121 18:17:43.520611 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 18:17:43 crc kubenswrapper[5099]: I0121 18:17:43.540295 5099 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.61:6443: connect: connection refused" Jan 21 18:17:43 crc kubenswrapper[5099]: I0121 18:17:43.540625 5099 status_manager.go:895] "Failed to get status for pod" podUID="91b29a2c-7464-4339-8b0b-218b0334f706" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.129.56.61:6443: connect: connection refused" Jan 21 18:17:43 crc kubenswrapper[5099]: I0121 18:17:43.550607 5099 scope.go:117] "RemoveContainer" containerID="a571febb49a23145b3d2cad3e4cc539fe35f0d8fdd7c4876c9868a1d1889ac58" Jan 21 18:17:43 crc kubenswrapper[5099]: I0121 18:17:43.569687 5099 scope.go:117] "RemoveContainer" containerID="b52ee1ab5f30ea08acd0bcb057da0e09c87c2f8116e08d2d2d7bd32030c95907" Jan 21 18:17:43 crc kubenswrapper[5099]: I0121 18:17:43.588592 5099 scope.go:117] "RemoveContainer" containerID="77fff9816418e4672b53ddb5f6696e449760e9cf1da327c01939f0df0bb333f7" Jan 21 18:17:43 crc kubenswrapper[5099]: I0121 18:17:43.607100 5099 scope.go:117] "RemoveContainer" containerID="69214b9af392a2ff7edbb9c7275209780398e51d77d93f6fc12bdb6df10c0018" Jan 21 18:17:43 crc kubenswrapper[5099]: I0121 18:17:43.628041 5099 scope.go:117] "RemoveContainer" containerID="3d3315339d0385d2d2682ac27476be8f619f5936da9319a84a35621133d98243" Jan 21 18:17:43 crc kubenswrapper[5099]: I0121 18:17:43.699765 5099 scope.go:117] "RemoveContainer" containerID="335d170d3d75b85cf52d2793bf8e5cdee4c8a7903ec3159e8f7a82cd0db5914e" Jan 21 18:17:43 crc kubenswrapper[5099]: E0121 18:17:43.700790 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"335d170d3d75b85cf52d2793bf8e5cdee4c8a7903ec3159e8f7a82cd0db5914e\": container with ID starting with 335d170d3d75b85cf52d2793bf8e5cdee4c8a7903ec3159e8f7a82cd0db5914e not found: ID does not exist" containerID="335d170d3d75b85cf52d2793bf8e5cdee4c8a7903ec3159e8f7a82cd0db5914e" Jan 21 18:17:43 crc kubenswrapper[5099]: I0121 18:17:43.700827 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"335d170d3d75b85cf52d2793bf8e5cdee4c8a7903ec3159e8f7a82cd0db5914e"} err="failed to get container status \"335d170d3d75b85cf52d2793bf8e5cdee4c8a7903ec3159e8f7a82cd0db5914e\": rpc error: code = NotFound desc = could not find container \"335d170d3d75b85cf52d2793bf8e5cdee4c8a7903ec3159e8f7a82cd0db5914e\": container with ID starting with 335d170d3d75b85cf52d2793bf8e5cdee4c8a7903ec3159e8f7a82cd0db5914e not found: ID does not exist" Jan 21 18:17:43 crc kubenswrapper[5099]: I0121 18:17:43.700859 5099 scope.go:117] "RemoveContainer" containerID="a571febb49a23145b3d2cad3e4cc539fe35f0d8fdd7c4876c9868a1d1889ac58" Jan 21 18:17:43 crc kubenswrapper[5099]: E0121 18:17:43.701378 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a571febb49a23145b3d2cad3e4cc539fe35f0d8fdd7c4876c9868a1d1889ac58\": container with ID starting with a571febb49a23145b3d2cad3e4cc539fe35f0d8fdd7c4876c9868a1d1889ac58 not found: ID does not exist" 
containerID="a571febb49a23145b3d2cad3e4cc539fe35f0d8fdd7c4876c9868a1d1889ac58" Jan 21 18:17:43 crc kubenswrapper[5099]: I0121 18:17:43.701414 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a571febb49a23145b3d2cad3e4cc539fe35f0d8fdd7c4876c9868a1d1889ac58"} err="failed to get container status \"a571febb49a23145b3d2cad3e4cc539fe35f0d8fdd7c4876c9868a1d1889ac58\": rpc error: code = NotFound desc = could not find container \"a571febb49a23145b3d2cad3e4cc539fe35f0d8fdd7c4876c9868a1d1889ac58\": container with ID starting with a571febb49a23145b3d2cad3e4cc539fe35f0d8fdd7c4876c9868a1d1889ac58 not found: ID does not exist" Jan 21 18:17:43 crc kubenswrapper[5099]: I0121 18:17:43.701439 5099 scope.go:117] "RemoveContainer" containerID="b52ee1ab5f30ea08acd0bcb057da0e09c87c2f8116e08d2d2d7bd32030c95907" Jan 21 18:17:43 crc kubenswrapper[5099]: E0121 18:17:43.702038 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b52ee1ab5f30ea08acd0bcb057da0e09c87c2f8116e08d2d2d7bd32030c95907\": container with ID starting with b52ee1ab5f30ea08acd0bcb057da0e09c87c2f8116e08d2d2d7bd32030c95907 not found: ID does not exist" containerID="b52ee1ab5f30ea08acd0bcb057da0e09c87c2f8116e08d2d2d7bd32030c95907" Jan 21 18:17:43 crc kubenswrapper[5099]: I0121 18:17:43.702073 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b52ee1ab5f30ea08acd0bcb057da0e09c87c2f8116e08d2d2d7bd32030c95907"} err="failed to get container status \"b52ee1ab5f30ea08acd0bcb057da0e09c87c2f8116e08d2d2d7bd32030c95907\": rpc error: code = NotFound desc = could not find container \"b52ee1ab5f30ea08acd0bcb057da0e09c87c2f8116e08d2d2d7bd32030c95907\": container with ID starting with b52ee1ab5f30ea08acd0bcb057da0e09c87c2f8116e08d2d2d7bd32030c95907 not found: ID does not exist" Jan 21 18:17:43 crc kubenswrapper[5099]: I0121 18:17:43.702113 5099 scope.go:117] "RemoveContainer" containerID="77fff9816418e4672b53ddb5f6696e449760e9cf1da327c01939f0df0bb333f7" Jan 21 18:17:43 crc kubenswrapper[5099]: E0121 18:17:43.702562 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77fff9816418e4672b53ddb5f6696e449760e9cf1da327c01939f0df0bb333f7\": container with ID starting with 77fff9816418e4672b53ddb5f6696e449760e9cf1da327c01939f0df0bb333f7 not found: ID does not exist" containerID="77fff9816418e4672b53ddb5f6696e449760e9cf1da327c01939f0df0bb333f7" Jan 21 18:17:43 crc kubenswrapper[5099]: I0121 18:17:43.702585 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77fff9816418e4672b53ddb5f6696e449760e9cf1da327c01939f0df0bb333f7"} err="failed to get container status \"77fff9816418e4672b53ddb5f6696e449760e9cf1da327c01939f0df0bb333f7\": rpc error: code = NotFound desc = could not find container \"77fff9816418e4672b53ddb5f6696e449760e9cf1da327c01939f0df0bb333f7\": container with ID starting with 77fff9816418e4672b53ddb5f6696e449760e9cf1da327c01939f0df0bb333f7 not found: ID does not exist" Jan 21 18:17:43 crc kubenswrapper[5099]: I0121 18:17:43.702597 5099 scope.go:117] "RemoveContainer" containerID="69214b9af392a2ff7edbb9c7275209780398e51d77d93f6fc12bdb6df10c0018" Jan 21 18:17:43 crc kubenswrapper[5099]: E0121 18:17:43.703149 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"69214b9af392a2ff7edbb9c7275209780398e51d77d93f6fc12bdb6df10c0018\": container with ID starting with 69214b9af392a2ff7edbb9c7275209780398e51d77d93f6fc12bdb6df10c0018 not found: ID does not exist" containerID="69214b9af392a2ff7edbb9c7275209780398e51d77d93f6fc12bdb6df10c0018" Jan 21 18:17:43 crc kubenswrapper[5099]: I0121 18:17:43.703264 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69214b9af392a2ff7edbb9c7275209780398e51d77d93f6fc12bdb6df10c0018"} err="failed to get container status \"69214b9af392a2ff7edbb9c7275209780398e51d77d93f6fc12bdb6df10c0018\": rpc error: code = NotFound desc = could not find container \"69214b9af392a2ff7edbb9c7275209780398e51d77d93f6fc12bdb6df10c0018\": container with ID starting with 69214b9af392a2ff7edbb9c7275209780398e51d77d93f6fc12bdb6df10c0018 not found: ID does not exist" Jan 21 18:17:43 crc kubenswrapper[5099]: I0121 18:17:43.703365 5099 scope.go:117] "RemoveContainer" containerID="3d3315339d0385d2d2682ac27476be8f619f5936da9319a84a35621133d98243" Jan 21 18:17:43 crc kubenswrapper[5099]: E0121 18:17:43.704528 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d3315339d0385d2d2682ac27476be8f619f5936da9319a84a35621133d98243\": container with ID starting with 3d3315339d0385d2d2682ac27476be8f619f5936da9319a84a35621133d98243 not found: ID does not exist" containerID="3d3315339d0385d2d2682ac27476be8f619f5936da9319a84a35621133d98243" Jan 21 18:17:43 crc kubenswrapper[5099]: I0121 18:17:43.704682 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d3315339d0385d2d2682ac27476be8f619f5936da9319a84a35621133d98243"} err="failed to get container status \"3d3315339d0385d2d2682ac27476be8f619f5936da9319a84a35621133d98243\": rpc error: code = NotFound desc = could not find container \"3d3315339d0385d2d2682ac27476be8f619f5936da9319a84a35621133d98243\": container with ID starting with 3d3315339d0385d2d2682ac27476be8f619f5936da9319a84a35621133d98243 not found: ID does not exist" Jan 21 18:17:43 crc kubenswrapper[5099]: I0121 18:17:43.920069 5099 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.61:6443: connect: connection refused" Jan 21 18:17:43 crc kubenswrapper[5099]: I0121 18:17:43.920546 5099 status_manager.go:895] "Failed to get status for pod" podUID="91b29a2c-7464-4339-8b0b-218b0334f706" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.129.56.61:6443: connect: connection refused" Jan 21 18:17:43 crc kubenswrapper[5099]: I0121 18:17:43.926324 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes" Jan 21 18:17:45 crc kubenswrapper[5099]: E0121 18:17:45.738254 5099 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.61:6443: connect: connection refused" Jan 21 18:17:45 crc kubenswrapper[5099]: E0121 18:17:45.739634 5099 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.61:6443: connect: connection refused" Jan 21 18:17:45 crc kubenswrapper[5099]: E0121 18:17:45.740177 5099 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.61:6443: connect: connection refused" Jan 21 18:17:45 crc kubenswrapper[5099]: E0121 18:17:45.740572 5099 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.61:6443: connect: connection refused" Jan 21 18:17:45 crc kubenswrapper[5099]: E0121 18:17:45.741161 5099 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.61:6443: connect: connection refused" Jan 21 18:17:45 crc kubenswrapper[5099]: I0121 18:17:45.741206 5099 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 21 18:17:45 crc kubenswrapper[5099]: E0121 18:17:45.741568 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.61:6443: connect: connection refused" interval="200ms" Jan 21 18:17:45 crc kubenswrapper[5099]: E0121 18:17:45.942837 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.61:6443: connect: connection refused" interval="400ms" Jan 21 18:17:46 crc kubenswrapper[5099]: E0121 18:17:46.344342 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.61:6443: connect: connection refused" interval="800ms" Jan 21 18:17:47 crc kubenswrapper[5099]: E0121 18:17:47.145622 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.61:6443: connect: connection refused" interval="1.6s" Jan 21 18:17:48 crc kubenswrapper[5099]: E0121 18:17:48.747010 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.61:6443: connect: connection refused" interval="3.2s" Jan 21 18:17:49 crc kubenswrapper[5099]: E0121 18:17:49.628114 5099 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.61:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188cd1d3a88e4a4e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
Jan 21 18:17:49 crc kubenswrapper[5099]: E0121 18:17:49.628114 5099 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.61:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188cd1d3a88e4a4e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 18:17:40.645964366 +0000 UTC m=+218.059926817,LastTimestamp:2026-01-21 18:17:40.645964366 +0000 UTC m=+218.059926817,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 18:17:50 crc kubenswrapper[5099]: I0121 18:17:50.958825 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" podUID="39b31197-feb5-4a81-8dca-de4b873dc013" containerName="oauth-openshift" containerID="cri-o://b73ca0e642c81b58bc7867dc41a0cee874a89509b144cd6128f9ce8bde3179ea" gracePeriod=15
Jan 21 18:17:51 crc kubenswrapper[5099]: E0121 18:17:51.948936 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.61:6443: connect: connection refused" interval="6.4s"
Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.065053 5099 patch_prober.go:28] interesting pod/machine-config-daemon-hsl47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.065206 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
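[Annotation] "Killing container with a grace period" (gracePeriod=15 above) names the usual two-step termination: a polite stop signal, then a forced kill if the grace period elapses. A sketch of those semantics under stated assumptions: this is not kubelet or CRI-O code, and proc is assumed to be a child process we can Wait on.

package sketch

import (
	"os"
	"syscall"
	"time"
)

// stopWithGrace sends SIGTERM, waits up to the grace period for a clean
// exit, then escalates to SIGKILL. Illustrative stand-in for the
// runtime's StopContainer behavior.
func stopWithGrace(proc *os.Process, grace time.Duration) {
	_ = proc.Signal(syscall.SIGTERM) // polite shutdown request
	done := make(chan struct{})
	go func() { proc.Wait(); close(done) }()
	select {
	case <-done: // exited in time; the log below shows exitCode=0
	case <-time.After(grace):
		_ = proc.Kill() // grace period elapsed; force termination
	}
}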
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.500876 5099 status_manager.go:895] "Failed to get status for pod" podUID="39b31197-feb5-4a81-8dca-de4b873dc013" pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-6qnjf\": dial tcp 38.129.56.61:6443: connect: connection refused" Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.501187 5099 status_manager.go:895] "Failed to get status for pod" podUID="91b29a2c-7464-4339-8b0b-218b0334f706" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.129.56.61:6443: connect: connection refused" Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.583618 5099 generic.go:358] "Generic (PLEG): container finished" podID="39b31197-feb5-4a81-8dca-de4b873dc013" containerID="b73ca0e642c81b58bc7867dc41a0cee874a89509b144cd6128f9ce8bde3179ea" exitCode=0 Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.583783 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" event={"ID":"39b31197-feb5-4a81-8dca-de4b873dc013","Type":"ContainerDied","Data":"b73ca0e642c81b58bc7867dc41a0cee874a89509b144cd6128f9ce8bde3179ea"} Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.583830 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" event={"ID":"39b31197-feb5-4a81-8dca-de4b873dc013","Type":"ContainerDied","Data":"162061321f5a4c16b240cfbee6a8e08376d6c8b648c3ca85315663b4fa746474"} Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.583853 5099 scope.go:117] "RemoveContainer" containerID="b73ca0e642c81b58bc7867dc41a0cee874a89509b144cd6128f9ce8bde3179ea" Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.584300 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.585322 5099 status_manager.go:895] "Failed to get status for pod" podUID="91b29a2c-7464-4339-8b0b-218b0334f706" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.129.56.61:6443: connect: connection refused" Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.586005 5099 status_manager.go:895] "Failed to get status for pod" podUID="39b31197-feb5-4a81-8dca-de4b873dc013" pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-6qnjf\": dial tcp 38.129.56.61:6443: connect: connection refused" Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.599577 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-user-template-provider-selection\") pod \"39b31197-feb5-4a81-8dca-de4b873dc013\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.599658 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-system-trusted-ca-bundle\") pod \"39b31197-feb5-4a81-8dca-de4b873dc013\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.599809 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/39b31197-feb5-4a81-8dca-de4b873dc013-audit-policies\") pod \"39b31197-feb5-4a81-8dca-de4b873dc013\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.600011 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-user-template-login\") pod \"39b31197-feb5-4a81-8dca-de4b873dc013\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.600164 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-system-service-ca\") pod \"39b31197-feb5-4a81-8dca-de4b873dc013\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.600204 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-system-router-certs\") pod \"39b31197-feb5-4a81-8dca-de4b873dc013\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.600302 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-system-ocp-branding-template\") pod 
\"39b31197-feb5-4a81-8dca-de4b873dc013\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.600388 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-system-cliconfig\") pod \"39b31197-feb5-4a81-8dca-de4b873dc013\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.600462 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-user-template-error\") pod \"39b31197-feb5-4a81-8dca-de4b873dc013\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.600569 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-user-idp-0-file-data\") pod \"39b31197-feb5-4a81-8dca-de4b873dc013\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.600636 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-system-serving-cert\") pod \"39b31197-feb5-4a81-8dca-de4b873dc013\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.600691 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-system-session\") pod \"39b31197-feb5-4a81-8dca-de4b873dc013\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.600721 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xm4mk\" (UniqueName: \"kubernetes.io/projected/39b31197-feb5-4a81-8dca-de4b873dc013-kube-api-access-xm4mk\") pod \"39b31197-feb5-4a81-8dca-de4b873dc013\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.600770 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/39b31197-feb5-4a81-8dca-de4b873dc013-audit-dir\") pod \"39b31197-feb5-4a81-8dca-de4b873dc013\" (UID: \"39b31197-feb5-4a81-8dca-de4b873dc013\") " Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.601078 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39b31197-feb5-4a81-8dca-de4b873dc013-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "39b31197-feb5-4a81-8dca-de4b873dc013" (UID: "39b31197-feb5-4a81-8dca-de4b873dc013"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.601374 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "39b31197-feb5-4a81-8dca-de4b873dc013" (UID: "39b31197-feb5-4a81-8dca-de4b873dc013"). 
InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.601624 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39b31197-feb5-4a81-8dca-de4b873dc013-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "39b31197-feb5-4a81-8dca-de4b873dc013" (UID: "39b31197-feb5-4a81-8dca-de4b873dc013"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.601650 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "39b31197-feb5-4a81-8dca-de4b873dc013" (UID: "39b31197-feb5-4a81-8dca-de4b873dc013"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.601665 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "39b31197-feb5-4a81-8dca-de4b873dc013" (UID: "39b31197-feb5-4a81-8dca-de4b873dc013"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.607662 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "39b31197-feb5-4a81-8dca-de4b873dc013" (UID: "39b31197-feb5-4a81-8dca-de4b873dc013"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.608601 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "39b31197-feb5-4a81-8dca-de4b873dc013" (UID: "39b31197-feb5-4a81-8dca-de4b873dc013"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.608842 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "39b31197-feb5-4a81-8dca-de4b873dc013" (UID: "39b31197-feb5-4a81-8dca-de4b873dc013"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.609594 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39b31197-feb5-4a81-8dca-de4b873dc013-kube-api-access-xm4mk" (OuterVolumeSpecName: "kube-api-access-xm4mk") pod "39b31197-feb5-4a81-8dca-de4b873dc013" (UID: "39b31197-feb5-4a81-8dca-de4b873dc013"). InnerVolumeSpecName "kube-api-access-xm4mk". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.609716 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "39b31197-feb5-4a81-8dca-de4b873dc013" (UID: "39b31197-feb5-4a81-8dca-de4b873dc013"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.609925 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "39b31197-feb5-4a81-8dca-de4b873dc013" (UID: "39b31197-feb5-4a81-8dca-de4b873dc013"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.610803 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "39b31197-feb5-4a81-8dca-de4b873dc013" (UID: "39b31197-feb5-4a81-8dca-de4b873dc013"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.611535 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "39b31197-feb5-4a81-8dca-de4b873dc013" (UID: "39b31197-feb5-4a81-8dca-de4b873dc013"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.616301 5099 scope.go:117] "RemoveContainer" containerID="b73ca0e642c81b58bc7867dc41a0cee874a89509b144cd6128f9ce8bde3179ea" Jan 21 18:17:52 crc kubenswrapper[5099]: E0121 18:17:52.617674 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b73ca0e642c81b58bc7867dc41a0cee874a89509b144cd6128f9ce8bde3179ea\": container with ID starting with b73ca0e642c81b58bc7867dc41a0cee874a89509b144cd6128f9ce8bde3179ea not found: ID does not exist" containerID="b73ca0e642c81b58bc7867dc41a0cee874a89509b144cd6128f9ce8bde3179ea" Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.617791 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b73ca0e642c81b58bc7867dc41a0cee874a89509b144cd6128f9ce8bde3179ea"} err="failed to get container status \"b73ca0e642c81b58bc7867dc41a0cee874a89509b144cd6128f9ce8bde3179ea\": rpc error: code = NotFound desc = could not find container \"b73ca0e642c81b58bc7867dc41a0cee874a89509b144cd6128f9ce8bde3179ea\": container with ID starting with b73ca0e642c81b58bc7867dc41a0cee874a89509b144cd6128f9ce8bde3179ea not found: ID does not exist" Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.622363 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "39b31197-feb5-4a81-8dca-de4b873dc013" (UID: "39b31197-feb5-4a81-8dca-de4b873dc013"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.702614 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.702679 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.702694 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.702710 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.702727 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.702761 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-user-template-error\") on 
node \"crc\" DevicePath \"\"" Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.702775 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.702788 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.702801 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.702815 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xm4mk\" (UniqueName: \"kubernetes.io/projected/39b31197-feb5-4a81-8dca-de4b873dc013-kube-api-access-xm4mk\") on node \"crc\" DevicePath \"\"" Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.702833 5099 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/39b31197-feb5-4a81-8dca-de4b873dc013-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.702849 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.702865 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39b31197-feb5-4a81-8dca-de4b873dc013-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.702878 5099 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/39b31197-feb5-4a81-8dca-de4b873dc013-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.902359 5099 status_manager.go:895] "Failed to get status for pod" podUID="91b29a2c-7464-4339-8b0b-218b0334f706" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.129.56.61:6443: connect: connection refused" Jan 21 18:17:52 crc kubenswrapper[5099]: I0121 18:17:52.902958 5099 status_manager.go:895] "Failed to get status for pod" podUID="39b31197-feb5-4a81-8dca-de4b873dc013" pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-6qnjf\": dial tcp 38.129.56.61:6443: connect: connection refused" Jan 21 18:17:53 crc kubenswrapper[5099]: I0121 18:17:53.917602 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 18:17:53 crc kubenswrapper[5099]: I0121 18:17:53.917631 5099 status_manager.go:895] "Failed to get status for pod" podUID="91b29a2c-7464-4339-8b0b-218b0334f706" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.129.56.61:6443: connect: connection refused" Jan 21 18:17:53 crc kubenswrapper[5099]: I0121 18:17:53.917936 5099 status_manager.go:895] "Failed to get status for pod" podUID="39b31197-feb5-4a81-8dca-de4b873dc013" pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-6qnjf\": dial tcp 38.129.56.61:6443: connect: connection refused" Jan 21 18:17:53 crc kubenswrapper[5099]: I0121 18:17:53.918120 5099 status_manager.go:895] "Failed to get status for pod" podUID="91b29a2c-7464-4339-8b0b-218b0334f706" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.129.56.61:6443: connect: connection refused" Jan 21 18:17:53 crc kubenswrapper[5099]: I0121 18:17:53.918266 5099 status_manager.go:895] "Failed to get status for pod" podUID="39b31197-feb5-4a81-8dca-de4b873dc013" pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-6qnjf\": dial tcp 38.129.56.61:6443: connect: connection refused" Jan 21 18:17:53 crc kubenswrapper[5099]: I0121 18:17:53.937945 5099 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="173cce9e-0a3e-4d85-b057-083e13852fa4" Jan 21 18:17:53 crc kubenswrapper[5099]: I0121 18:17:53.938163 5099 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="173cce9e-0a3e-4d85-b057-083e13852fa4" Jan 21 18:17:53 crc kubenswrapper[5099]: E0121 18:17:53.938978 5099 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.61:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 18:17:53 crc kubenswrapper[5099]: I0121 18:17:53.939387 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 18:17:54 crc kubenswrapper[5099]: I0121 18:17:54.604443 5099 generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="f7fb4a0f7da4c47f74406ae984af45d5a5b8d75473b769210634131d189e4c99" exitCode=0 Jan 21 18:17:54 crc kubenswrapper[5099]: I0121 18:17:54.604576 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"f7fb4a0f7da4c47f74406ae984af45d5a5b8d75473b769210634131d189e4c99"} Jan 21 18:17:54 crc kubenswrapper[5099]: I0121 18:17:54.605205 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"5f604a209a922d6c5f0c4f574becbd76f693b605c700efe1b0b6298df4c8829c"} Jan 21 18:17:54 crc kubenswrapper[5099]: I0121 18:17:54.605676 5099 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="173cce9e-0a3e-4d85-b057-083e13852fa4" Jan 21 18:17:54 crc kubenswrapper[5099]: I0121 18:17:54.605696 5099 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="173cce9e-0a3e-4d85-b057-083e13852fa4" Jan 21 18:17:54 crc kubenswrapper[5099]: E0121 18:17:54.606502 5099 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.61:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 18:17:54 crc kubenswrapper[5099]: I0121 18:17:54.606647 5099 status_manager.go:895] "Failed to get status for pod" podUID="39b31197-feb5-4a81-8dca-de4b873dc013" pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-6qnjf\": dial tcp 38.129.56.61:6443: connect: connection refused" Jan 21 18:17:54 crc kubenswrapper[5099]: I0121 18:17:54.607329 5099 status_manager.go:895] "Failed to get status for pod" podUID="91b29a2c-7464-4339-8b0b-218b0334f706" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.129.56.61:6443: connect: connection refused" Jan 21 18:17:54 crc kubenswrapper[5099]: I0121 18:17:54.610056 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 18:17:54 crc kubenswrapper[5099]: I0121 18:17:54.610116 5099 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="af7a566641ee4a32e0af093712f5a413ba74a4178f0ad380ede1b059032e730a" exitCode=1 Jan 21 18:17:54 crc kubenswrapper[5099]: I0121 18:17:54.610189 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"af7a566641ee4a32e0af093712f5a413ba74a4178f0ad380ede1b059032e730a"} Jan 21 18:17:54 crc kubenswrapper[5099]: I0121 18:17:54.611266 5099 scope.go:117] "RemoveContainer" containerID="af7a566641ee4a32e0af093712f5a413ba74a4178f0ad380ede1b059032e730a" Jan 21 18:17:54 crc 
Jan 21 18:17:54 crc kubenswrapper[5099]: I0121 18:17:54.611383 5099 status_manager.go:895] "Failed to get status for pod" podUID="39b31197-feb5-4a81-8dca-de4b873dc013" pod="openshift-authentication/oauth-openshift-66458b6674-6qnjf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-6qnjf\": dial tcp 38.129.56.61:6443: connect: connection refused"
Jan 21 18:17:54 crc kubenswrapper[5099]: I0121 18:17:54.612232 5099 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.61:6443: connect: connection refused"
Jan 21 18:17:54 crc kubenswrapper[5099]: I0121 18:17:54.612812 5099 status_manager.go:895] "Failed to get status for pod" podUID="91b29a2c-7464-4339-8b0b-218b0334f706" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.129.56.61:6443: connect: connection refused"
Jan 21 18:17:55 crc kubenswrapper[5099]: I0121 18:17:55.259551 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 18:17:55 crc kubenswrapper[5099]: I0121 18:17:55.620663 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"c4dc5ea54419fa273db861e4cad10e39cfbb32735b0681f5c7418ac106cda630"}
Jan 21 18:17:55 crc kubenswrapper[5099]: I0121 18:17:55.620724 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"b02cb961ed02623d480de3333b2eed2f3655a39d70cfe84efa45d8953d22fe01"}
Jan 21 18:17:55 crc kubenswrapper[5099]: I0121 18:17:55.620761 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"a063eed967bf04187fb22e8c8360544653d02d366224a0be40cb233210a5622b"}
Jan 21 18:17:55 crc kubenswrapper[5099]: I0121 18:17:55.627249 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 21 18:17:55 crc kubenswrapper[5099]: I0121 18:17:55.627711 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"6683ac309e96000b8b7f76eb699aa71c6399c1e6580bb454c619d52fbd88364c"}
Jan 21 18:17:56 crc kubenswrapper[5099]: I0121 18:17:56.638442 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"50826db4c37ff3264da60c8aca8513eb108568b0326b0b29bbd0f5de4dc6fb28"}
Jan 21 18:17:56 crc kubenswrapper[5099]: I0121 18:17:56.638986 5099 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="173cce9e-0a3e-4d85-b057-083e13852fa4"
Jan 21 18:17:56 crc kubenswrapper[5099]: I0121 18:17:56.639032 5099 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="173cce9e-0a3e-4d85-b057-083e13852fa4"
Jan 21 18:17:56 crc kubenswrapper[5099]: I0121 18:17:56.639113 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 18:17:56 crc kubenswrapper[5099]: I0121 18:17:56.639139 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"b7ef81d1f53205c8232721c8736fc31b72b116f878aff0485357a88d4ca99d9f"}
Jan 21 18:17:58 crc kubenswrapper[5099]: I0121 18:17:58.940006 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 18:17:58 crc kubenswrapper[5099]: I0121 18:17:58.940432 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 18:17:58 crc kubenswrapper[5099]: I0121 18:17:58.946667 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 18:18:02 crc kubenswrapper[5099]: I0121 18:18:02.170491 5099 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 18:18:02 crc kubenswrapper[5099]: I0121 18:18:02.170533 5099 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 18:18:02 crc kubenswrapper[5099]: I0121 18:18:02.770816 5099 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="173cce9e-0a3e-4d85-b057-083e13852fa4"
Jan 21 18:18:02 crc kubenswrapper[5099]: I0121 18:18:02.771370 5099 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="173cce9e-0a3e-4d85-b057-083e13852fa4"
Jan 21 18:18:02 crc kubenswrapper[5099]: I0121 18:18:02.775798 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 18:18:02 crc kubenswrapper[5099]: I0121 18:18:02.835944 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 18:18:02 crc kubenswrapper[5099]: I0121 18:18:02.836199 5099 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Jan 21 18:18:02 crc kubenswrapper[5099]: I0121 18:18:02.836318 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Jan 21 18:18:03 crc kubenswrapper[5099]: I0121 18:18:03.781473 5099 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="173cce9e-0a3e-4d85-b057-083e13852fa4"
podUID="173cce9e-0a3e-4d85-b057-083e13852fa4" Jan 21 18:18:03 crc kubenswrapper[5099]: I0121 18:18:03.936125 5099 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="33973738-c1e5-4aec-8582-691708c0e68d" Jan 21 18:18:05 crc kubenswrapper[5099]: I0121 18:18:05.258853 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 18:18:12 crc kubenswrapper[5099]: I0121 18:18:12.649613 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Jan 21 18:18:12 crc kubenswrapper[5099]: I0121 18:18:12.700614 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Jan 21 18:18:12 crc kubenswrapper[5099]: I0121 18:18:12.788038 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Jan 21 18:18:12 crc kubenswrapper[5099]: I0121 18:18:12.843700 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 18:18:12 crc kubenswrapper[5099]: I0121 18:18:12.848258 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 18:18:13 crc kubenswrapper[5099]: I0121 18:18:13.018912 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Jan 21 18:18:13 crc kubenswrapper[5099]: I0121 18:18:13.155287 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Jan 21 18:18:13 crc kubenswrapper[5099]: I0121 18:18:13.196306 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Jan 21 18:18:13 crc kubenswrapper[5099]: I0121 18:18:13.225282 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Jan 21 18:18:13 crc kubenswrapper[5099]: I0121 18:18:13.772803 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Jan 21 18:18:14 crc kubenswrapper[5099]: I0121 18:18:14.037925 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Jan 21 18:18:14 crc kubenswrapper[5099]: I0121 18:18:14.207420 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Jan 21 18:18:14 crc kubenswrapper[5099]: I0121 18:18:14.216227 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Jan 21 18:18:14 crc kubenswrapper[5099]: I0121 18:18:14.236622 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Jan 21 18:18:14 crc kubenswrapper[5099]: I0121 18:18:14.280917 5099 reflector.go:430] "Caches populated" 
type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Jan 21 18:18:14 crc kubenswrapper[5099]: I0121 18:18:14.398447 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Jan 21 18:18:14 crc kubenswrapper[5099]: I0121 18:18:14.624631 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Jan 21 18:18:14 crc kubenswrapper[5099]: I0121 18:18:14.925956 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Jan 21 18:18:14 crc kubenswrapper[5099]: I0121 18:18:14.941404 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Jan 21 18:18:15 crc kubenswrapper[5099]: I0121 18:18:15.026449 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Jan 21 18:18:15 crc kubenswrapper[5099]: I0121 18:18:15.060305 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Jan 21 18:18:15 crc kubenswrapper[5099]: I0121 18:18:15.062301 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Jan 21 18:18:15 crc kubenswrapper[5099]: I0121 18:18:15.083960 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Jan 21 18:18:15 crc kubenswrapper[5099]: I0121 18:18:15.248992 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Jan 21 18:18:15 crc kubenswrapper[5099]: I0121 18:18:15.265205 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Jan 21 18:18:15 crc kubenswrapper[5099]: I0121 18:18:15.280040 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Jan 21 18:18:15 crc kubenswrapper[5099]: I0121 18:18:15.280184 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Jan 21 18:18:15 crc kubenswrapper[5099]: I0121 18:18:15.429664 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Jan 21 18:18:15 crc kubenswrapper[5099]: I0121 18:18:15.440449 5099 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Jan 21 18:18:15 crc kubenswrapper[5099]: I0121 18:18:15.446889 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-66458b6674-6qnjf"] Jan 21 18:18:15 crc kubenswrapper[5099]: I0121 18:18:15.447313 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 18:18:15 crc kubenswrapper[5099]: I0121 18:18:15.447899 5099 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="173cce9e-0a3e-4d85-b057-083e13852fa4" Jan 21 18:18:15 crc kubenswrapper[5099]: I0121 18:18:15.447948 5099 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="173cce9e-0a3e-4d85-b057-083e13852fa4" Jan 21 18:18:15 crc kubenswrapper[5099]: I0121 18:18:15.453419 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 18:18:15 crc kubenswrapper[5099]: I0121 18:18:15.471964 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=13.471937244 podStartE2EDuration="13.471937244s" podCreationTimestamp="2026-01-21 18:18:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:18:15.468249291 +0000 UTC m=+252.882211752" watchObservedRunningTime="2026-01-21 18:18:15.471937244 +0000 UTC m=+252.885899715" Jan 21 18:18:15 crc kubenswrapper[5099]: I0121 18:18:15.490703 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Jan 21 18:18:15 crc kubenswrapper[5099]: I0121 18:18:15.501201 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Jan 21 18:18:15 crc kubenswrapper[5099]: I0121 18:18:15.501347 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Jan 21 18:18:15 crc kubenswrapper[5099]: I0121 18:18:15.501724 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Jan 21 18:18:15 crc kubenswrapper[5099]: I0121 18:18:15.639849 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Jan 21 18:18:15 crc kubenswrapper[5099]: I0121 18:18:15.643622 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Jan 21 18:18:15 crc kubenswrapper[5099]: I0121 18:18:15.658263 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Jan 21 18:18:15 crc kubenswrapper[5099]: I0121 18:18:15.664456 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Jan 21 18:18:15 crc kubenswrapper[5099]: I0121 18:18:15.666876 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Jan 21 18:18:15 crc kubenswrapper[5099]: I0121 18:18:15.727154 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Jan 21 18:18:15 crc kubenswrapper[5099]: I0121 18:18:15.846849 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Jan 21 18:18:15 crc kubenswrapper[5099]: I0121 18:18:15.923516 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39b31197-feb5-4a81-8dca-de4b873dc013" path="/var/lib/kubelet/pods/39b31197-feb5-4a81-8dca-de4b873dc013/volumes" Jan 21 18:18:15 crc kubenswrapper[5099]: I0121 
Jan 21 18:18:15 crc kubenswrapper[5099]: I0121 18:18:15.945625 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\""
Jan 21 18:18:16 crc kubenswrapper[5099]: I0121 18:18:16.061819 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\""
Jan 21 18:18:16 crc kubenswrapper[5099]: I0121 18:18:16.130839 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\""
Jan 21 18:18:16 crc kubenswrapper[5099]: I0121 18:18:16.159755 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\""
Jan 21 18:18:16 crc kubenswrapper[5099]: I0121 18:18:16.199435 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\""
Jan 21 18:18:16 crc kubenswrapper[5099]: I0121 18:18:16.261498 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\""
Jan 21 18:18:16 crc kubenswrapper[5099]: I0121 18:18:16.285247 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\""
Jan 21 18:18:16 crc kubenswrapper[5099]: I0121 18:18:16.651522 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\""
Jan 21 18:18:16 crc kubenswrapper[5099]: I0121 18:18:16.654611 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\""
Jan 21 18:18:16 crc kubenswrapper[5099]: I0121 18:18:16.667950 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\""
Jan 21 18:18:16 crc kubenswrapper[5099]: I0121 18:18:16.688408 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\""
Jan 21 18:18:16 crc kubenswrapper[5099]: I0121 18:18:16.742720 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\""
Jan 21 18:18:16 crc kubenswrapper[5099]: I0121 18:18:16.774978 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\""
Jan 21 18:18:16 crc kubenswrapper[5099]: I0121 18:18:16.788201 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\""
Jan 21 18:18:16 crc kubenswrapper[5099]: I0121 18:18:16.788805 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\""
Jan 21 18:18:16 crc kubenswrapper[5099]: I0121 18:18:16.822153 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\""
Jan 21 18:18:16 crc kubenswrapper[5099]: I0121 18:18:16.891556 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\""
Jan 21 18:18:17 crc kubenswrapper[5099]: I0121 18:18:17.020181 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\""
Jan 21 18:18:17 crc kubenswrapper[5099]: I0121 18:18:17.069146 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\""
Jan 21 18:18:17 crc kubenswrapper[5099]: I0121 18:18:17.084038 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\""
Jan 21 18:18:17 crc kubenswrapper[5099]: I0121 18:18:17.173807 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\""
Jan 21 18:18:17 crc kubenswrapper[5099]: I0121 18:18:17.203470 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\""
Jan 21 18:18:17 crc kubenswrapper[5099]: I0121 18:18:17.224432 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\""
Jan 21 18:18:17 crc kubenswrapper[5099]: I0121 18:18:17.261865 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\""
Jan 21 18:18:17 crc kubenswrapper[5099]: I0121 18:18:17.409126 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\""
Jan 21 18:18:17 crc kubenswrapper[5099]: I0121 18:18:17.413890 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\""
Jan 21 18:18:17 crc kubenswrapper[5099]: I0121 18:18:17.428378 5099 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160"
Jan 21 18:18:17 crc kubenswrapper[5099]: I0121 18:18:17.492948 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\""
Jan 21 18:18:17 crc kubenswrapper[5099]: I0121 18:18:17.579650 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\""
Jan 21 18:18:17 crc kubenswrapper[5099]: I0121 18:18:17.606330 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\""
Jan 21 18:18:17 crc kubenswrapper[5099]: I0121 18:18:17.875939 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\""
Jan 21 18:18:17 crc kubenswrapper[5099]: I0121 18:18:17.905591 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\""
Jan 21 18:18:17 crc kubenswrapper[5099]: I0121 18:18:17.928688 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\""
Jan 21 18:18:18 crc kubenswrapper[5099]: I0121 18:18:18.013305 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\""
Jan 21 18:18:18 crc kubenswrapper[5099]: I0121 18:18:18.126937 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\""
Jan 21 18:18:18 crc kubenswrapper[5099]: I0121 18:18:18.158470 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\""
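
Each "Caches populated" line marks a client-go reflector finishing its initial LIST and switching to WATCH for a single object: the kubelet runs one such reflector per ConfigMap and Secret referenced by pods on the node (plus node-level informers such as the *v1.Node one above), so a freshly restarted kubelet on a single-node cluster emits hundreds of these while its caches warm up. The pattern behind these lines, sketched with a shared informer factory (a standalone example, not kubelet code; the kubeconfig path is assumed):

    // cachesync.go - sketch: start an informer and block until its cache is
    // populated, the moment the kubelet would log "Caches populated".
    package main

    import (
    	"fmt"
    	"time"

    	"k8s.io/client-go/informers"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	stop := make(chan struct{})
    	defer close(stop)

    	factory := informers.NewSharedInformerFactory(cs, 10*time.Minute)
    	factory.Core().V1().ConfigMaps().Informer() // register a ConfigMap watch

    	factory.Start(stop)
    	// Returns once every registered informer has delivered its initial LIST.
    	for typ, ok := range factory.WaitForCacheSync(stop) {
    		fmt.Printf("cache populated for %v: %v\n", typ, ok)
    	}
    }
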
Jan 21 18:18:18 crc kubenswrapper[5099]: I0121 18:18:18.195754 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\""
Jan 21 18:18:18 crc kubenswrapper[5099]: I0121 18:18:18.206182 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\""
Jan 21 18:18:18 crc kubenswrapper[5099]: I0121 18:18:18.226965 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\""
Jan 21 18:18:18 crc kubenswrapper[5099]: I0121 18:18:18.250847 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\""
Jan 21 18:18:18 crc kubenswrapper[5099]: I0121 18:18:18.259609 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\""
Jan 21 18:18:18 crc kubenswrapper[5099]: I0121 18:18:18.325257 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\""
Jan 21 18:18:18 crc kubenswrapper[5099]: I0121 18:18:18.356632 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\""
Jan 21 18:18:18 crc kubenswrapper[5099]: I0121 18:18:18.380146 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\""
Jan 21 18:18:18 crc kubenswrapper[5099]: I0121 18:18:18.458561 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\""
Jan 21 18:18:18 crc kubenswrapper[5099]: I0121 18:18:18.685063 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\""
Jan 21 18:18:18 crc kubenswrapper[5099]: I0121 18:18:18.719481 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\""
Jan 21 18:18:18 crc kubenswrapper[5099]: I0121 18:18:18.769126 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\""
Jan 21 18:18:18 crc kubenswrapper[5099]: I0121 18:18:18.769214 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\""
Jan 21 18:18:18 crc kubenswrapper[5099]: I0121 18:18:18.858572 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\""
Jan 21 18:18:18 crc kubenswrapper[5099]: I0121 18:18:18.893927 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\""
Jan 21 18:18:18 crc kubenswrapper[5099]: I0121 18:18:18.930435 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\""
Jan 21 18:18:19 crc kubenswrapper[5099]: I0121 18:18:19.125702 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\""
Jan 21 18:18:19 crc kubenswrapper[5099]: I0121 18:18:19.155411 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\""
Jan 21 18:18:19 crc kubenswrapper[5099]: I0121 18:18:19.157590 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\""
Jan 21 18:18:19 crc kubenswrapper[5099]: I0121 18:18:19.187130 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\""
Jan 21 18:18:19 crc kubenswrapper[5099]: I0121 18:18:19.264230 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\""
Jan 21 18:18:19 crc kubenswrapper[5099]: I0121 18:18:19.333656 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\""
Jan 21 18:18:19 crc kubenswrapper[5099]: I0121 18:18:19.408017 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\""
Jan 21 18:18:19 crc kubenswrapper[5099]: I0121 18:18:19.408418 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\""
Jan 21 18:18:19 crc kubenswrapper[5099]: I0121 18:18:19.486230 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\""
Jan 21 18:18:19 crc kubenswrapper[5099]: I0121 18:18:19.486829 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\""
Jan 21 18:18:19 crc kubenswrapper[5099]: I0121 18:18:19.653965 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\""
Jan 21 18:18:19 crc kubenswrapper[5099]: I0121 18:18:19.694825 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\""
Jan 21 18:18:19 crc kubenswrapper[5099]: I0121 18:18:19.741970 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\""
Jan 21 18:18:19 crc kubenswrapper[5099]: I0121 18:18:19.825167 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\""
Jan 21 18:18:19 crc kubenswrapper[5099]: I0121 18:18:19.913049 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\""
Jan 21 18:18:20 crc kubenswrapper[5099]: I0121 18:18:20.035723 5099 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Jan 21 18:18:20 crc kubenswrapper[5099]: I0121 18:18:20.120428 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\""
Jan 21 18:18:20 crc kubenswrapper[5099]: I0121 18:18:20.154952 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\""
Jan 21 18:18:20 crc kubenswrapper[5099]: I0121 18:18:20.195778 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\""
Jan 21 18:18:20 crc kubenswrapper[5099]: I0121 18:18:20.206520 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\""
Jan 21 18:18:20 crc kubenswrapper[5099]: I0121 18:18:20.241469 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\""
Jan 21 18:18:20 crc kubenswrapper[5099]: I0121 18:18:20.278181 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\""
Jan 21 18:18:20 crc kubenswrapper[5099]: I0121 18:18:20.292477 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\""
Jan 21 18:18:20 crc kubenswrapper[5099]: I0121 18:18:20.352151 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\""
Jan 21 18:18:20 crc kubenswrapper[5099]: I0121 18:18:20.445099 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\""
Jan 21 18:18:20 crc kubenswrapper[5099]: I0121 18:18:20.528815 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\""
Jan 21 18:18:20 crc kubenswrapper[5099]: I0121 18:18:20.610396 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\""
Jan 21 18:18:20 crc kubenswrapper[5099]: I0121 18:18:20.635812 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\""
Jan 21 18:18:20 crc kubenswrapper[5099]: I0121 18:18:20.831093 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\""
Jan 21 18:18:20 crc kubenswrapper[5099]: I0121 18:18:20.834084 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\""
Jan 21 18:18:20 crc kubenswrapper[5099]: I0121 18:18:20.837072 5099 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160"
Jan 21 18:18:20 crc kubenswrapper[5099]: I0121 18:18:20.938780 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\""
Jan 21 18:18:20 crc kubenswrapper[5099]: I0121 18:18:20.959535 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\""
Jan 21 18:18:21 crc kubenswrapper[5099]: I0121 18:18:21.008583 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\""
Jan 21 18:18:21 crc kubenswrapper[5099]: I0121 18:18:21.064193 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\""
Jan 21 18:18:21 crc kubenswrapper[5099]: I0121 18:18:21.079066 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\""
Jan 21 18:18:21 crc kubenswrapper[5099]: I0121 18:18:21.124521 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\""
Jan 21 18:18:21 crc kubenswrapper[5099]: I0121 18:18:21.214652 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\""
Jan 21 18:18:21 crc kubenswrapper[5099]: I0121 18:18:21.321000 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\""
Jan 21 18:18:21 crc kubenswrapper[5099]: I0121 18:18:21.363780 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\""
Jan 21 18:18:21 crc kubenswrapper[5099]: I0121 18:18:21.435033 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\""
Jan 21 18:18:21 crc kubenswrapper[5099]: I0121 18:18:21.529314 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\""
Jan 21 18:18:21 crc kubenswrapper[5099]: I0121 18:18:21.549792 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\""
Jan 21 18:18:21 crc kubenswrapper[5099]: I0121 18:18:21.651772 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\""
Jan 21 18:18:21 crc kubenswrapper[5099]: I0121 18:18:21.660162 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\""
Jan 21 18:18:21 crc kubenswrapper[5099]: I0121 18:18:21.718786 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\""
Jan 21 18:18:21 crc kubenswrapper[5099]: I0121 18:18:21.777414 5099 ???:1] "http: TLS handshake error from 192.168.126.11:35816: no serving certificate available for the kubelet"
Jan 21 18:18:21 crc kubenswrapper[5099]: I0121 18:18:21.786178 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\""
Jan 21 18:18:21 crc kubenswrapper[5099]: I0121 18:18:21.845281 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\""
Jan 21 18:18:21 crc kubenswrapper[5099]: I0121 18:18:21.897374 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\""
Jan 21 18:18:21 crc kubenswrapper[5099]: I0121 18:18:21.920651 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\""
Jan 21 18:18:21 crc kubenswrapper[5099]: I0121 18:18:21.921896 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\""
Jan 21 18:18:22 crc kubenswrapper[5099]: I0121 18:18:22.004721 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\""
Jan 21 18:18:22 crc kubenswrapper[5099]: I0121 18:18:22.020545 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\""
Jan 21 18:18:22 crc kubenswrapper[5099]: I0121 18:18:22.064183 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\""
Jan 21 18:18:22 crc kubenswrapper[5099]: I0121 18:18:22.064218 5099 patch_prober.go:28] interesting pod/machine-config-daemon-hsl47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 18:18:22 crc kubenswrapper[5099]: I0121 18:18:22.064277 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 18:18:22 crc kubenswrapper[5099]: I0121 18:18:22.120239 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\""
Jan 21 18:18:22 crc kubenswrapper[5099]: I0121 18:18:22.186315 5099 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160"
Jan 21 18:18:22 crc kubenswrapper[5099]: I0121 18:18:22.202078 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\""
Jan 21 18:18:22 crc kubenswrapper[5099]: I0121 18:18:22.309595 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\""
Jan 21 18:18:22 crc kubenswrapper[5099]: I0121 18:18:22.393357 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\""
Jan 21 18:18:22 crc kubenswrapper[5099]: I0121 18:18:22.500054 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\""
Jan 21 18:18:22 crc kubenswrapper[5099]: I0121 18:18:22.516010 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\""
Jan 21 18:18:22 crc kubenswrapper[5099]: I0121 18:18:22.520727 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\""
Jan 21 18:18:22 crc kubenswrapper[5099]: I0121 18:18:22.615206 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\""
Jan 21 18:18:22 crc kubenswrapper[5099]: I0121 18:18:22.700505 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\""
Jan 21 18:18:22 crc kubenswrapper[5099]: I0121 18:18:22.740092 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\""
Jan 21 18:18:22 crc kubenswrapper[5099]: I0121 18:18:22.828339 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\""
Jan 21 18:18:22 crc kubenswrapper[5099]: I0121 18:18:22.936088 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\""
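
The patch_prober/prober pair above records one failed HTTP liveness probe: the machine-config-daemon's /health endpoint on 127.0.0.1:8798 refused the connection, most plausibly because the process was not listening at that moment. A single failure stays below the default failureThreshold of 3, so this entry alone does not trigger a restart. A probe producing exactly this check, reconstructed as a corev1 spec fragment (only host, port, and path are visible in the log; the timing values are assumptions):

    // probe.go - sketch: corev1 liveness probe matching the failing check above.
    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	"k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
    	probe := &corev1.Probe{
    		ProbeHandler: corev1.ProbeHandler{
    			HTTPGet: &corev1.HTTPGetAction{
    				Host: "127.0.0.1", // hostNetwork daemon probed over loopback
    				Path: "/health",
    				Port: intstr.FromInt(8798),
    			},
    		},
    		PeriodSeconds:    30, // assumed
    		FailureThreshold: 3,  // Kubernetes default
    	}
    	fmt.Printf("%+v\n", probe)
    }
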
Jan 21 18:18:22 crc kubenswrapper[5099]: I0121 18:18:22.951277 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\""
Jan 21 18:18:22 crc kubenswrapper[5099]: I0121 18:18:22.989723 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\""
Jan 21 18:18:23 crc kubenswrapper[5099]: I0121 18:18:23.000951 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\""
Jan 21 18:18:23 crc kubenswrapper[5099]: I0121 18:18:23.011238 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\""
Jan 21 18:18:23 crc kubenswrapper[5099]: I0121 18:18:23.080756 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\""
Jan 21 18:18:23 crc kubenswrapper[5099]: I0121 18:18:23.162516 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\""
Jan 21 18:18:23 crc kubenswrapper[5099]: I0121 18:18:23.186991 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\""
Jan 21 18:18:23 crc kubenswrapper[5099]: I0121 18:18:23.191402 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\""
Jan 21 18:18:23 crc kubenswrapper[5099]: I0121 18:18:23.195221 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\""
Jan 21 18:18:23 crc kubenswrapper[5099]: I0121 18:18:23.207575 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\""
Jan 21 18:18:23 crc kubenswrapper[5099]: I0121 18:18:23.221455 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\""
Jan 21 18:18:23 crc kubenswrapper[5099]: I0121 18:18:23.253713 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\""
Jan 21 18:18:23 crc kubenswrapper[5099]: I0121 18:18:23.270108 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\""
Jan 21 18:18:23 crc kubenswrapper[5099]: I0121 18:18:23.308699 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\""
Jan 21 18:18:23 crc kubenswrapper[5099]: I0121 18:18:23.340472 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\""
Jan 21 18:18:23 crc kubenswrapper[5099]: I0121 18:18:23.347533 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\""
Jan 21 18:18:23 crc kubenswrapper[5099]: I0121 18:18:23.361356 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\""
Jan 21 18:18:23 crc kubenswrapper[5099]: I0121 18:18:23.391257 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\""
Jan 21 18:18:23 crc kubenswrapper[5099]: I0121 18:18:23.391475 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\""
Jan 21 18:18:23 crc kubenswrapper[5099]: I0121 18:18:23.449227 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\""
Jan 21 18:18:23 crc kubenswrapper[5099]: I0121 18:18:23.467365 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\""
Jan 21 18:18:23 crc kubenswrapper[5099]: I0121 18:18:23.663506 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\""
Jan 21 18:18:23 crc kubenswrapper[5099]: I0121 18:18:23.717129 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\""
Jan 21 18:18:23 crc kubenswrapper[5099]: I0121 18:18:23.874070 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\""
Jan 21 18:18:23 crc kubenswrapper[5099]: I0121 18:18:23.889539 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\""
Jan 21 18:18:23 crc kubenswrapper[5099]: I0121 18:18:23.924037 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\""
Jan 21 18:18:24 crc kubenswrapper[5099]: I0121 18:18:24.017245 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\""
Jan 21 18:18:24 crc kubenswrapper[5099]: I0121 18:18:24.049136 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\""
Jan 21 18:18:24 crc kubenswrapper[5099]: I0121 18:18:24.168475 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\""
Jan 21 18:18:24 crc kubenswrapper[5099]: I0121 18:18:24.176126 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\""
Jan 21 18:18:24 crc kubenswrapper[5099]: I0121 18:18:24.234546 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\""
Jan 21 18:18:24 crc kubenswrapper[5099]: I0121 18:18:24.260073 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\""
Jan 21 18:18:24 crc kubenswrapper[5099]: I0121 18:18:24.273944 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\""
Jan 21 18:18:24 crc kubenswrapper[5099]: I0121 18:18:24.294076 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\""
Jan 21 18:18:24 crc kubenswrapper[5099]: I0121 18:18:24.328940 5099 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160"
Jan 21 18:18:24 crc kubenswrapper[5099]: I0121 18:18:24.441667 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\""
Jan 21 18:18:24 crc kubenswrapper[5099]: I0121 18:18:24.446316 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\""
Jan 21 18:18:24 crc kubenswrapper[5099]: I0121 18:18:24.584490 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\""
Jan 21 18:18:24 crc kubenswrapper[5099]: I0121 18:18:24.625343 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\""
Jan 21 18:18:24 crc kubenswrapper[5099]: I0121 18:18:24.701540 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\""
Jan 21 18:18:24 crc kubenswrapper[5099]: I0121 18:18:24.778336 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\""
Jan 21 18:18:24 crc kubenswrapper[5099]: I0121 18:18:24.787243 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\""
Jan 21 18:18:24 crc kubenswrapper[5099]: I0121 18:18:24.805627 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\""
Jan 21 18:18:24 crc kubenswrapper[5099]: I0121 18:18:24.822260 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\""
Jan 21 18:18:24 crc kubenswrapper[5099]: I0121 18:18:24.838248 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\""
Jan 21 18:18:24 crc kubenswrapper[5099]: I0121 18:18:24.864699 5099 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 21 18:18:24 crc kubenswrapper[5099]: I0121 18:18:24.864982 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" containerID="cri-o://835a0e0e5425d8ab724a9d09077873178b8899a3a0b7a3fe02ffdae32f7e6bd8" gracePeriod=5
Jan 21 18:18:24 crc kubenswrapper[5099]: I0121 18:18:24.936590 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\""
Jan 21 18:18:25 crc kubenswrapper[5099]: I0121 18:18:25.036302 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\""
Jan 21 18:18:25 crc kubenswrapper[5099]: I0121 18:18:25.039001 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\""
Jan 21 18:18:25 crc kubenswrapper[5099]: I0121 18:18:25.095338 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\""
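
"SyncLoop REMOVE" with source="file" means a static pod manifest (here the kube-apiserver startup monitor) disappeared from the kubelet's staticPodPath, typically /etc/kubernetes/manifests on OpenShift nodes; the kubelet reacts by killing the container with the 5-second grace period logged immediately after. For API-managed pods the equivalent knob is DeleteOptions.GracePeriodSeconds; a sketch of that API-side analogue (pod name and namespace are hypothetical, and note that the mirror pod of a static pod cannot be removed this way, only the manifest file can):

    // gracedelete.go - sketch: delete an API pod with an explicit 5s grace period.
    package main

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	grace := int64(5)
    	if err := cs.CoreV1().Pods("default").Delete(context.TODO(), "example-pod",
    		metav1.DeleteOptions{GracePeriodSeconds: &grace}); err != nil {
    		panic(err)
    	}
    }
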
Jan 21 18:18:25 crc kubenswrapper[5099]: I0121 18:18:25.126128 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\""
Jan 21 18:18:25 crc kubenswrapper[5099]: I0121 18:18:25.219790 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\""
Jan 21 18:18:25 crc kubenswrapper[5099]: I0121 18:18:25.247893 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\""
Jan 21 18:18:25 crc kubenswrapper[5099]: I0121 18:18:25.253887 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\""
Jan 21 18:18:25 crc kubenswrapper[5099]: I0121 18:18:25.384749 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\""
Jan 21 18:18:25 crc kubenswrapper[5099]: I0121 18:18:25.411535 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\""
Jan 21 18:18:25 crc kubenswrapper[5099]: I0121 18:18:25.480283 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\""
Jan 21 18:18:25 crc kubenswrapper[5099]: I0121 18:18:25.542043 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\""
Jan 21 18:18:25 crc kubenswrapper[5099]: I0121 18:18:25.552000 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\""
Jan 21 18:18:25 crc kubenswrapper[5099]: I0121 18:18:25.663964 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\""
Jan 21 18:18:25 crc kubenswrapper[5099]: I0121 18:18:25.704081 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\""
Jan 21 18:18:25 crc kubenswrapper[5099]: I0121 18:18:25.810344 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\""
Jan 21 18:18:25 crc kubenswrapper[5099]: I0121 18:18:25.823646 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\""
Jan 21 18:18:25 crc kubenswrapper[5099]: I0121 18:18:25.836454 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\""
Jan 21 18:18:25 crc kubenswrapper[5099]: I0121 18:18:25.882899 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\""
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.051483 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\""
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.056669 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\""
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.125434 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\""
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.175512 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\""
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.369202 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\""
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.377006 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-f988f7dc9-vqhhz"]
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.378243 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor"
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.378446 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor"
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.378788 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="39b31197-feb5-4a81-8dca-de4b873dc013" containerName="oauth-openshift"
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.378973 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="39b31197-feb5-4a81-8dca-de4b873dc013" containerName="oauth-openshift"
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.379160 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="91b29a2c-7464-4339-8b0b-218b0334f706" containerName="installer"
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.379328 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="91b29a2c-7464-4339-8b0b-218b0334f706" containerName="installer"
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.379925 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="91b29a2c-7464-4339-8b0b-218b0334f706" containerName="installer"
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.380152 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor"
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.380327 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="39b31197-feb5-4a81-8dca-de4b873dc013" containerName="oauth-openshift"
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.409703 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-f988f7dc9-vqhhz"]
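
Before admitting the just-added oauth-openshift pod, the kubelet's CPU and memory managers drop bookkeeping for containers that no longer exist (the finished installer, the replaced oauth-openshift revision, and the startup-monitor killed above); state_mem.go is the in-memory mirror of a checkpoint the kubelet keeps on disk. A sketch that dumps that checkpoint (the file name is standard, but the JSON schema varies across kubelet versions, hence the generic decoding):

    // cpustate.go - sketch: print the kubelet CPU manager checkpoint file.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    func main() {
    	raw, err := os.ReadFile("/var/lib/kubelet/cpu_manager_state")
    	if err != nil {
    		panic(err)
    	}
    	var state map[string]any // schema differs by version; decode generically
    	if err := json.Unmarshal(raw, &state); err != nil {
    		panic(err)
    	}
    	for k, v := range state {
    		fmt.Printf("%s: %v\n", k, v)
    	}
    }
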
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.410249 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-f988f7dc9-vqhhz"
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.413173 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\""
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.413404 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\""
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.413623 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\""
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.418525 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\""
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.418914 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\""
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.419046 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\""
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.419056 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\""
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.419020 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\""
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.419007 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\""
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.418964 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\""
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.419214 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\""
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.420571 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\""
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.433599 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\""
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.435313 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\""
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.437035 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\""
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.494248 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5fbc19e7-48e7-40b9-bf8f-607183384ad2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-f988f7dc9-vqhhz\" (UID: \"5fbc19e7-48e7-40b9-bf8f-607183384ad2\") " pod="openshift-authentication/oauth-openshift-f988f7dc9-vqhhz"
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.494330 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrwz2\" (UniqueName: \"kubernetes.io/projected/5fbc19e7-48e7-40b9-bf8f-607183384ad2-kube-api-access-rrwz2\") pod \"oauth-openshift-f988f7dc9-vqhhz\" (UID: \"5fbc19e7-48e7-40b9-bf8f-607183384ad2\") " pod="openshift-authentication/oauth-openshift-f988f7dc9-vqhhz"
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.494399 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5fbc19e7-48e7-40b9-bf8f-607183384ad2-v4-0-config-system-session\") pod \"oauth-openshift-f988f7dc9-vqhhz\" (UID: \"5fbc19e7-48e7-40b9-bf8f-607183384ad2\") " pod="openshift-authentication/oauth-openshift-f988f7dc9-vqhhz"
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.494491 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5fbc19e7-48e7-40b9-bf8f-607183384ad2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-f988f7dc9-vqhhz\" (UID: \"5fbc19e7-48e7-40b9-bf8f-607183384ad2\") " pod="openshift-authentication/oauth-openshift-f988f7dc9-vqhhz"
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.494565 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5fbc19e7-48e7-40b9-bf8f-607183384ad2-audit-policies\") pod \"oauth-openshift-f988f7dc9-vqhhz\" (UID: \"5fbc19e7-48e7-40b9-bf8f-607183384ad2\") " pod="openshift-authentication/oauth-openshift-f988f7dc9-vqhhz"
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.494646 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5fbc19e7-48e7-40b9-bf8f-607183384ad2-v4-0-config-user-template-error\") pod \"oauth-openshift-f988f7dc9-vqhhz\" (UID: \"5fbc19e7-48e7-40b9-bf8f-607183384ad2\") " pod="openshift-authentication/oauth-openshift-f988f7dc9-vqhhz"
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.494696 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5fbc19e7-48e7-40b9-bf8f-607183384ad2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-f988f7dc9-vqhhz\" (UID: \"5fbc19e7-48e7-40b9-bf8f-607183384ad2\") " pod="openshift-authentication/oauth-openshift-f988f7dc9-vqhhz"
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.494760 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5fbc19e7-48e7-40b9-bf8f-607183384ad2-v4-0-config-system-router-certs\") pod \"oauth-openshift-f988f7dc9-vqhhz\" (UID: \"5fbc19e7-48e7-40b9-bf8f-607183384ad2\") " pod="openshift-authentication/oauth-openshift-f988f7dc9-vqhhz"
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.494794 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5fbc19e7-48e7-40b9-bf8f-607183384ad2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-f988f7dc9-vqhhz\" (UID: \"5fbc19e7-48e7-40b9-bf8f-607183384ad2\") " pod="openshift-authentication/oauth-openshift-f988f7dc9-vqhhz"
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.494847 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5fbc19e7-48e7-40b9-bf8f-607183384ad2-audit-dir\") pod \"oauth-openshift-f988f7dc9-vqhhz\" (UID: \"5fbc19e7-48e7-40b9-bf8f-607183384ad2\") " pod="openshift-authentication/oauth-openshift-f988f7dc9-vqhhz"
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.494940 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5fbc19e7-48e7-40b9-bf8f-607183384ad2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-f988f7dc9-vqhhz\" (UID: \"5fbc19e7-48e7-40b9-bf8f-607183384ad2\") " pod="openshift-authentication/oauth-openshift-f988f7dc9-vqhhz"
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.494991 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5fbc19e7-48e7-40b9-bf8f-607183384ad2-v4-0-config-user-template-login\") pod \"oauth-openshift-f988f7dc9-vqhhz\" (UID: \"5fbc19e7-48e7-40b9-bf8f-607183384ad2\") " pod="openshift-authentication/oauth-openshift-f988f7dc9-vqhhz"
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.495069 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5fbc19e7-48e7-40b9-bf8f-607183384ad2-v4-0-config-system-service-ca\") pod \"oauth-openshift-f988f7dc9-vqhhz\" (UID: \"5fbc19e7-48e7-40b9-bf8f-607183384ad2\") " pod="openshift-authentication/oauth-openshift-f988f7dc9-vqhhz"
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.495102 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fbc19e7-48e7-40b9-bf8f-607183384ad2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-f988f7dc9-vqhhz\" (UID: \"5fbc19e7-48e7-40b9-bf8f-607183384ad2\") " pod="openshift-authentication/oauth-openshift-f988f7dc9-vqhhz"
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.596748 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5fbc19e7-48e7-40b9-bf8f-607183384ad2-audit-dir\") pod \"oauth-openshift-f988f7dc9-vqhhz\" (UID: \"5fbc19e7-48e7-40b9-bf8f-607183384ad2\") " pod="openshift-authentication/oauth-openshift-f988f7dc9-vqhhz"
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.597092 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5fbc19e7-48e7-40b9-bf8f-607183384ad2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-f988f7dc9-vqhhz\" (UID: \"5fbc19e7-48e7-40b9-bf8f-607183384ad2\") " pod="openshift-authentication/oauth-openshift-f988f7dc9-vqhhz"
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.597226 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5fbc19e7-48e7-40b9-bf8f-607183384ad2-v4-0-config-user-template-login\") pod \"oauth-openshift-f988f7dc9-vqhhz\" (UID: \"5fbc19e7-48e7-40b9-bf8f-607183384ad2\") " pod="openshift-authentication/oauth-openshift-f988f7dc9-vqhhz"
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.597351 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5fbc19e7-48e7-40b9-bf8f-607183384ad2-v4-0-config-system-service-ca\") pod \"oauth-openshift-f988f7dc9-vqhhz\" (UID: \"5fbc19e7-48e7-40b9-bf8f-607183384ad2\") " pod="openshift-authentication/oauth-openshift-f988f7dc9-vqhhz"
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.597472 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fbc19e7-48e7-40b9-bf8f-607183384ad2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-f988f7dc9-vqhhz\" (UID: \"5fbc19e7-48e7-40b9-bf8f-607183384ad2\") " pod="openshift-authentication/oauth-openshift-f988f7dc9-vqhhz"
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.597557 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5fbc19e7-48e7-40b9-bf8f-607183384ad2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-f988f7dc9-vqhhz\" (UID: \"5fbc19e7-48e7-40b9-bf8f-607183384ad2\") " pod="openshift-authentication/oauth-openshift-f988f7dc9-vqhhz"
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.597650 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rrwz2\" (UniqueName: \"kubernetes.io/projected/5fbc19e7-48e7-40b9-bf8f-607183384ad2-kube-api-access-rrwz2\") pod \"oauth-openshift-f988f7dc9-vqhhz\" (UID: \"5fbc19e7-48e7-40b9-bf8f-607183384ad2\") " pod="openshift-authentication/oauth-openshift-f988f7dc9-vqhhz"
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.597787 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5fbc19e7-48e7-40b9-bf8f-607183384ad2-v4-0-config-system-session\") pod \"oauth-openshift-f988f7dc9-vqhhz\" (UID: \"5fbc19e7-48e7-40b9-bf8f-607183384ad2\") " pod="openshift-authentication/oauth-openshift-f988f7dc9-vqhhz"
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.597879 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5fbc19e7-48e7-40b9-bf8f-607183384ad2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-f988f7dc9-vqhhz\" (UID: \"5fbc19e7-48e7-40b9-bf8f-607183384ad2\") " pod="openshift-authentication/oauth-openshift-f988f7dc9-vqhhz"
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.597972 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5fbc19e7-48e7-40b9-bf8f-607183384ad2-audit-policies\") pod \"oauth-openshift-f988f7dc9-vqhhz\" (UID: \"5fbc19e7-48e7-40b9-bf8f-607183384ad2\") " pod="openshift-authentication/oauth-openshift-f988f7dc9-vqhhz"
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.598058 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5fbc19e7-48e7-40b9-bf8f-607183384ad2-v4-0-config-user-template-error\") pod \"oauth-openshift-f988f7dc9-vqhhz\" (UID: \"5fbc19e7-48e7-40b9-bf8f-607183384ad2\") " pod="openshift-authentication/oauth-openshift-f988f7dc9-vqhhz"
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.598160 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5fbc19e7-48e7-40b9-bf8f-607183384ad2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-f988f7dc9-vqhhz\" (UID: \"5fbc19e7-48e7-40b9-bf8f-607183384ad2\") " pod="openshift-authentication/oauth-openshift-f988f7dc9-vqhhz"
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.598318 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5fbc19e7-48e7-40b9-bf8f-607183384ad2-v4-0-config-system-router-certs\") pod \"oauth-openshift-f988f7dc9-vqhhz\" (UID: \"5fbc19e7-48e7-40b9-bf8f-607183384ad2\") " pod="openshift-authentication/oauth-openshift-f988f7dc9-vqhhz"
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.598685 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5fbc19e7-48e7-40b9-bf8f-607183384ad2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-f988f7dc9-vqhhz\" (UID: \"5fbc19e7-48e7-40b9-bf8f-607183384ad2\") " pod="openshift-authentication/oauth-openshift-f988f7dc9-vqhhz"
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.599076 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5fbc19e7-48e7-40b9-bf8f-607183384ad2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-f988f7dc9-vqhhz\" (UID: \"5fbc19e7-48e7-40b9-bf8f-607183384ad2\") " pod="openshift-authentication/oauth-openshift-f988f7dc9-vqhhz"
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.596876 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5fbc19e7-48e7-40b9-bf8f-607183384ad2-audit-dir\") pod \"oauth-openshift-f988f7dc9-vqhhz\" (UID: \"5fbc19e7-48e7-40b9-bf8f-607183384ad2\") " pod="openshift-authentication/oauth-openshift-f988f7dc9-vqhhz"
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.599619 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5fbc19e7-48e7-40b9-bf8f-607183384ad2-v4-0-config-system-service-ca\") pod \"oauth-openshift-f988f7dc9-vqhhz\" (UID: \"5fbc19e7-48e7-40b9-bf8f-607183384ad2\") " pod="openshift-authentication/oauth-openshift-f988f7dc9-vqhhz"
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.599837 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5fbc19e7-48e7-40b9-bf8f-607183384ad2-audit-policies\") pod \"oauth-openshift-f988f7dc9-vqhhz\" (UID: \"5fbc19e7-48e7-40b9-bf8f-607183384ad2\") " pod="openshift-authentication/oauth-openshift-f988f7dc9-vqhhz"
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.601057 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fbc19e7-48e7-40b9-bf8f-607183384ad2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-f988f7dc9-vqhhz\" (UID: \"5fbc19e7-48e7-40b9-bf8f-607183384ad2\") " pod="openshift-authentication/oauth-openshift-f988f7dc9-vqhhz"
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.603455 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5fbc19e7-48e7-40b9-bf8f-607183384ad2-v4-0-config-user-template-error\") pod \"oauth-openshift-f988f7dc9-vqhhz\" (UID: \"5fbc19e7-48e7-40b9-bf8f-607183384ad2\") " pod="openshift-authentication/oauth-openshift-f988f7dc9-vqhhz"
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.603490 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5fbc19e7-48e7-40b9-bf8f-607183384ad2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-f988f7dc9-vqhhz\" (UID: \"5fbc19e7-48e7-40b9-bf8f-607183384ad2\") " pod="openshift-authentication/oauth-openshift-f988f7dc9-vqhhz"
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.603503 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5fbc19e7-48e7-40b9-bf8f-607183384ad2-v4-0-config-system-router-certs\") pod \"oauth-openshift-f988f7dc9-vqhhz\" (UID: \"5fbc19e7-48e7-40b9-bf8f-607183384ad2\") " pod="openshift-authentication/oauth-openshift-f988f7dc9-vqhhz"
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.603896 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5fbc19e7-48e7-40b9-bf8f-607183384ad2-v4-0-config-user-template-login\") pod \"oauth-openshift-f988f7dc9-vqhhz\" (UID: \"5fbc19e7-48e7-40b9-bf8f-607183384ad2\") " pod="openshift-authentication/oauth-openshift-f988f7dc9-vqhhz"
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.604306 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5fbc19e7-48e7-40b9-bf8f-607183384ad2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-f988f7dc9-vqhhz\" (UID: \"5fbc19e7-48e7-40b9-bf8f-607183384ad2\") " pod="openshift-authentication/oauth-openshift-f988f7dc9-vqhhz"
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.604466 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5fbc19e7-48e7-40b9-bf8f-607183384ad2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-f988f7dc9-vqhhz\" (UID: \"5fbc19e7-48e7-40b9-bf8f-607183384ad2\") " pod="openshift-authentication/oauth-openshift-f988f7dc9-vqhhz"
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.606090 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5fbc19e7-48e7-40b9-bf8f-607183384ad2-v4-0-config-system-session\") pod \"oauth-openshift-f988f7dc9-vqhhz\" (UID: \"5fbc19e7-48e7-40b9-bf8f-607183384ad2\") " pod="openshift-authentication/oauth-openshift-f988f7dc9-vqhhz"
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.618210 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5fbc19e7-48e7-40b9-bf8f-607183384ad2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-f988f7dc9-vqhhz\" (UID: \"5fbc19e7-48e7-40b9-bf8f-607183384ad2\") " pod="openshift-authentication/oauth-openshift-f988f7dc9-vqhhz"
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.618470 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrwz2\" (UniqueName: \"kubernetes.io/projected/5fbc19e7-48e7-40b9-bf8f-607183384ad2-kube-api-access-rrwz2\") pod \"oauth-openshift-f988f7dc9-vqhhz\" (UID: \"5fbc19e7-48e7-40b9-bf8f-607183384ad2\") " pod="openshift-authentication/oauth-openshift-f988f7dc9-vqhhz"
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.643643 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\""
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.733442 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-f988f7dc9-vqhhz"
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.878294 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\""
Jan 21 18:18:26 crc kubenswrapper[5099]: I0121 18:18:26.972719 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-f988f7dc9-vqhhz"]
Jan 21 18:18:27 crc kubenswrapper[5099]: I0121 18:18:27.090197 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\""
Jan 21 18:18:27 crc kubenswrapper[5099]: I0121 18:18:27.150819 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\""
Jan 21 18:18:27 crc kubenswrapper[5099]: I0121 18:18:27.184525 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\""
Jan 21 18:18:27 crc kubenswrapper[5099]: I0121 18:18:27.904037 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\""
Jan 21 18:18:27 crc kubenswrapper[5099]: I0121 18:18:27.932900 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-f988f7dc9-vqhhz" event={"ID":"5fbc19e7-48e7-40b9-bf8f-607183384ad2","Type":"ContainerStarted","Data":"4c5c0cb915976d62f2a3e7d6ba5eb310008b131e624bb5bbca648b06db1842e7"}
Jan 21 18:18:27 crc kubenswrapper[5099]: I0121 18:18:27.932975 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-f988f7dc9-vqhhz" event={"ID":"5fbc19e7-48e7-40b9-bf8f-607183384ad2","Type":"ContainerStarted","Data":"a83b56a330e9e2c609f49d5836b3e9af83ccf68c089512e987def94570440027"}
Jan 21 18:18:27 crc kubenswrapper[5099]: I0121 18:18:27.933764 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-f988f7dc9-vqhhz"
observedRunningTime="2026-01-21 18:18:27.955893183 +0000 UTC m=+265.369855644" watchObservedRunningTime="2026-01-21 18:18:27.957393822 +0000 UTC m=+265.371356303" Jan 21 18:18:28 crc kubenswrapper[5099]: I0121 18:18:28.059204 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-f988f7dc9-vqhhz" Jan 21 18:18:28 crc kubenswrapper[5099]: I0121 18:18:28.060534 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Jan 21 18:18:28 crc kubenswrapper[5099]: I0121 18:18:28.220382 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Jan 21 18:18:28 crc kubenswrapper[5099]: I0121 18:18:28.364265 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Jan 21 18:18:28 crc kubenswrapper[5099]: I0121 18:18:28.553236 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Jan 21 18:18:28 crc kubenswrapper[5099]: I0121 18:18:28.787432 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Jan 21 18:18:29 crc kubenswrapper[5099]: I0121 18:18:29.005272 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Jan 21 18:18:30 crc kubenswrapper[5099]: I0121 18:18:30.001659 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Jan 21 18:18:30 crc kubenswrapper[5099]: I0121 18:18:30.450407 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Jan 21 18:18:30 crc kubenswrapper[5099]: I0121 18:18:30.450494 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 18:18:30 crc kubenswrapper[5099]: I0121 18:18:30.452198 5099 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Jan 21 18:18:30 crc kubenswrapper[5099]: I0121 18:18:30.555602 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 21 18:18:30 crc kubenswrapper[5099]: I0121 18:18:30.555646 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 21 18:18:30 crc kubenswrapper[5099]: I0121 18:18:30.555678 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 21 18:18:30 crc kubenswrapper[5099]: I0121 18:18:30.555702 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 21 18:18:30 crc kubenswrapper[5099]: I0121 18:18:30.555744 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 18:18:30 crc kubenswrapper[5099]: I0121 18:18:30.555801 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 18:18:30 crc kubenswrapper[5099]: I0121 18:18:30.555803 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 21 18:18:30 crc kubenswrapper[5099]: I0121 18:18:30.555815 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 18:18:30 crc kubenswrapper[5099]: I0121 18:18:30.555886 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" (OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 18:18:30 crc kubenswrapper[5099]: I0121 18:18:30.556444 5099 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" DevicePath \"\"" Jan 21 18:18:30 crc kubenswrapper[5099]: I0121 18:18:30.556463 5099 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 21 18:18:30 crc kubenswrapper[5099]: I0121 18:18:30.556479 5099 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\"" Jan 21 18:18:30 crc kubenswrapper[5099]: I0121 18:18:30.556489 5099 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\"" Jan 21 18:18:30 crc kubenswrapper[5099]: I0121 18:18:30.567499 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 18:18:30 crc kubenswrapper[5099]: I0121 18:18:30.658197 5099 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 21 18:18:30 crc kubenswrapper[5099]: I0121 18:18:30.953824 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Jan 21 18:18:30 crc kubenswrapper[5099]: I0121 18:18:30.954442 5099 generic.go:358] "Generic (PLEG): container finished" podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="835a0e0e5425d8ab724a9d09077873178b8899a3a0b7a3fe02ffdae32f7e6bd8" exitCode=137 Jan 21 18:18:30 crc kubenswrapper[5099]: I0121 18:18:30.954553 5099 scope.go:117] "RemoveContainer" containerID="835a0e0e5425d8ab724a9d09077873178b8899a3a0b7a3fe02ffdae32f7e6bd8" Jan 21 18:18:30 crc kubenswrapper[5099]: I0121 18:18:30.954622 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 18:18:30 crc kubenswrapper[5099]: I0121 18:18:30.976408 5099 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Jan 21 18:18:30 crc kubenswrapper[5099]: I0121 18:18:30.976886 5099 scope.go:117] "RemoveContainer" containerID="835a0e0e5425d8ab724a9d09077873178b8899a3a0b7a3fe02ffdae32f7e6bd8" Jan 21 18:18:30 crc kubenswrapper[5099]: E0121 18:18:30.977490 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"835a0e0e5425d8ab724a9d09077873178b8899a3a0b7a3fe02ffdae32f7e6bd8\": container with ID starting with 835a0e0e5425d8ab724a9d09077873178b8899a3a0b7a3fe02ffdae32f7e6bd8 not found: ID does not exist" containerID="835a0e0e5425d8ab724a9d09077873178b8899a3a0b7a3fe02ffdae32f7e6bd8" Jan 21 18:18:30 crc kubenswrapper[5099]: I0121 18:18:30.977611 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"835a0e0e5425d8ab724a9d09077873178b8899a3a0b7a3fe02ffdae32f7e6bd8"} err="failed to get container status \"835a0e0e5425d8ab724a9d09077873178b8899a3a0b7a3fe02ffdae32f7e6bd8\": rpc error: code = NotFound desc = could not find container \"835a0e0e5425d8ab724a9d09077873178b8899a3a0b7a3fe02ffdae32f7e6bd8\": container with ID starting with 835a0e0e5425d8ab724a9d09077873178b8899a3a0b7a3fe02ffdae32f7e6bd8 not found: ID does not exist" Jan 21 18:18:31 crc kubenswrapper[5099]: I0121 18:18:31.922158 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes" Jan 21 18:18:44 crc kubenswrapper[5099]: I0121 18:18:44.042667 5099 generic.go:358] "Generic (PLEG): container finished" podID="67a0e83c-f043-4329-95ac-4cc0a6ac538f" containerID="fa1f346b9ce96a2f7d4a1d26a91e963bb32b45873f80bb3688ec0d64dae65c19" exitCode=0 Jan 21 18:18:44 crc kubenswrapper[5099]: I0121 18:18:44.042811 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-lxg2b" event={"ID":"67a0e83c-f043-4329-95ac-4cc0a6ac538f","Type":"ContainerDied","Data":"fa1f346b9ce96a2f7d4a1d26a91e963bb32b45873f80bb3688ec0d64dae65c19"} Jan 21 18:18:44 crc kubenswrapper[5099]: I0121 18:18:44.044267 5099 scope.go:117] "RemoveContainer" containerID="fa1f346b9ce96a2f7d4a1d26a91e963bb32b45873f80bb3688ec0d64dae65c19" Jan 21 18:18:45 crc kubenswrapper[5099]: I0121 18:18:45.051651 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-lxg2b" event={"ID":"67a0e83c-f043-4329-95ac-4cc0a6ac538f","Type":"ContainerStarted","Data":"3746f33a6c1203e3da4a3fde1a568114e4234057d4dd34a7937d1e151d21a891"} Jan 21 18:18:45 crc kubenswrapper[5099]: I0121 18:18:45.052082 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-lxg2b" Jan 21 18:18:45 crc kubenswrapper[5099]: I0121 18:18:45.055122 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-lxg2b" 
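
The marketplace-operator entries just above trace one complete in-place container restart: a "Generic (PLEG): container finished" record with exitCode=0, a "RemoveContainer" scope, a ContainerStarted event, and the readiness probe flipping from "not ready" back to "ready". The same PLEG record shapes recur throughout this capture, so they can be mined mechanically. The sketch below is illustrative and not part of the log: it assumes the dump has been saved to a local file (the name kubelet.log is invented), and its regexes are lifted from the message formats visible in this capture.

import re

# Regexes derived from the kubelet message formats visible in this capture.
# Quotes inside messages appear backslash-escaped in parts of this dump, so
# each raw line is unescaped before matching.
FINISHED = re.compile(
    r'"Generic \(PLEG\): container finished" '
    r'podID="([^"]+)" containerID="([0-9a-f]+)" exitCode=(-?\d+)'
)
STARTED = re.compile(
    r'"SyncLoop \(PLEG\): event for pod" pod="([^"]+)" '
    r'event=\{"ID":"[^"]+","Type":"ContainerStarted","Data":"([0-9a-f]+)"\}'
)

def pleg_events(path="kubelet.log"):  # filename is an illustrative assumption
    """Yield ('finished'|'started', details) for each PLEG lifecycle record."""
    with open(path, encoding="utf-8") as f:
        for raw in f:
            line = raw.replace('\\"', '"')  # undo the escaped quotes in this dump
            for m in FINISHED.finditer(line):
                yield "finished", {"podID": m.group(1),
                                   "containerID": m.group(2),
                                   "exitCode": int(m.group(3))}
            for m in STARTED.finditer(line):
                yield "started", {"pod": m.group(1), "containerID": m.group(2)}

if __name__ == "__main__":
    for kind, details in pleg_events():
        print(kind, details)

Run over this capture, it would emit the finished/started pair for marketplace-operator-547dbd544d-lxg2b seen above, plus the controller-manager and route-controller-manager rollovers that follow.
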
Jan 21 18:18:50 crc kubenswrapper[5099]: I0121 18:18:50.883284 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-5pwm7"] Jan 21 18:18:50 crc kubenswrapper[5099]: I0121 18:18:50.884522 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b6cccf98-5pwm7" podUID="85ddc24f-5591-4300-9269-cbc659dc7b4f" containerName="controller-manager" containerID="cri-o://3aee521a344ef0d410860d95f89e5e08d1609ba13c6f9cb6a92e0275b7e865b6" gracePeriod=30 Jan 21 18:18:50 crc kubenswrapper[5099]: I0121 18:18:50.933699 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-5p85q"] Jan 21 18:18:50 crc kubenswrapper[5099]: I0121 18:18:50.934116 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-5p85q" podUID="9f61a6cf-7081-41ed-9e89-05212a634fb0" containerName="route-controller-manager" containerID="cri-o://b91f85dd2b12063e4eebbc8521ea0027ab3759849983328aba120ab372a1e03e" gracePeriod=30 Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.089454 5099 generic.go:358] "Generic (PLEG): container finished" podID="9f61a6cf-7081-41ed-9e89-05212a634fb0" containerID="b91f85dd2b12063e4eebbc8521ea0027ab3759849983328aba120ab372a1e03e" exitCode=0 Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.089536 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-5p85q" event={"ID":"9f61a6cf-7081-41ed-9e89-05212a634fb0","Type":"ContainerDied","Data":"b91f85dd2b12063e4eebbc8521ea0027ab3759849983328aba120ab372a1e03e"} Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.091959 5099 generic.go:358] "Generic (PLEG): container finished" podID="85ddc24f-5591-4300-9269-cbc659dc7b4f" containerID="3aee521a344ef0d410860d95f89e5e08d1609ba13c6f9cb6a92e0275b7e865b6" exitCode=0 Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.092135 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-5pwm7" event={"ID":"85ddc24f-5591-4300-9269-cbc659dc7b4f","Type":"ContainerDied","Data":"3aee521a344ef0d410860d95f89e5e08d1609ba13c6f9cb6a92e0275b7e865b6"} Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.265892 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-5pwm7" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.300865 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-5p85q" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.302213 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6cc894f6b5-z2gqx"] Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.303267 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="85ddc24f-5591-4300-9269-cbc659dc7b4f" containerName="controller-manager" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.303290 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="85ddc24f-5591-4300-9269-cbc659dc7b4f" containerName="controller-manager" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.303332 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9f61a6cf-7081-41ed-9e89-05212a634fb0" containerName="route-controller-manager" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.303341 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f61a6cf-7081-41ed-9e89-05212a634fb0" containerName="route-controller-manager" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.303551 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="9f61a6cf-7081-41ed-9e89-05212a634fb0" containerName="route-controller-manager" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.303571 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="85ddc24f-5591-4300-9269-cbc659dc7b4f" containerName="controller-manager" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.307774 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cc894f6b5-z2gqx" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.315358 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6cc894f6b5-z2gqx"] Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.338696 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5558b976b5-w6wqd"] Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.346528 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5558b976b5-w6wqd" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.353937 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5558b976b5-w6wqd"] Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.389156 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f61a6cf-7081-41ed-9e89-05212a634fb0-config\") pod \"9f61a6cf-7081-41ed-9e89-05212a634fb0\" (UID: \"9f61a6cf-7081-41ed-9e89-05212a634fb0\") " Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.389342 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9f61a6cf-7081-41ed-9e89-05212a634fb0-client-ca\") pod \"9f61a6cf-7081-41ed-9e89-05212a634fb0\" (UID: \"9f61a6cf-7081-41ed-9e89-05212a634fb0\") " Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.389442 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/85ddc24f-5591-4300-9269-cbc659dc7b4f-proxy-ca-bundles\") pod \"85ddc24f-5591-4300-9269-cbc659dc7b4f\" (UID: \"85ddc24f-5591-4300-9269-cbc659dc7b4f\") " Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.391239 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/85ddc24f-5591-4300-9269-cbc659dc7b4f-tmp\") pod \"85ddc24f-5591-4300-9269-cbc659dc7b4f\" (UID: \"85ddc24f-5591-4300-9269-cbc659dc7b4f\") " Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.390453 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f61a6cf-7081-41ed-9e89-05212a634fb0-client-ca" (OuterVolumeSpecName: "client-ca") pod "9f61a6cf-7081-41ed-9e89-05212a634fb0" (UID: "9f61a6cf-7081-41ed-9e89-05212a634fb0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.390860 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f61a6cf-7081-41ed-9e89-05212a634fb0-config" (OuterVolumeSpecName: "config") pod "9f61a6cf-7081-41ed-9e89-05212a634fb0" (UID: "9f61a6cf-7081-41ed-9e89-05212a634fb0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.391393 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9f61a6cf-7081-41ed-9e89-05212a634fb0-tmp\") pod \"9f61a6cf-7081-41ed-9e89-05212a634fb0\" (UID: \"9f61a6cf-7081-41ed-9e89-05212a634fb0\") " Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.391531 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85ddc24f-5591-4300-9269-cbc659dc7b4f-config\") pod \"85ddc24f-5591-4300-9269-cbc659dc7b4f\" (UID: \"85ddc24f-5591-4300-9269-cbc659dc7b4f\") " Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.392452 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f61a6cf-7081-41ed-9e89-05212a634fb0-serving-cert\") pod \"9f61a6cf-7081-41ed-9e89-05212a634fb0\" (UID: \"9f61a6cf-7081-41ed-9e89-05212a634fb0\") " Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.391468 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85ddc24f-5591-4300-9269-cbc659dc7b4f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "85ddc24f-5591-4300-9269-cbc659dc7b4f" (UID: "85ddc24f-5591-4300-9269-cbc659dc7b4f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.391756 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f61a6cf-7081-41ed-9e89-05212a634fb0-tmp" (OuterVolumeSpecName: "tmp") pod "9f61a6cf-7081-41ed-9e89-05212a634fb0" (UID: "9f61a6cf-7081-41ed-9e89-05212a634fb0"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.391770 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85ddc24f-5591-4300-9269-cbc659dc7b4f-tmp" (OuterVolumeSpecName: "tmp") pod "85ddc24f-5591-4300-9269-cbc659dc7b4f" (UID: "85ddc24f-5591-4300-9269-cbc659dc7b4f"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.392390 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85ddc24f-5591-4300-9269-cbc659dc7b4f-config" (OuterVolumeSpecName: "config") pod "85ddc24f-5591-4300-9269-cbc659dc7b4f" (UID: "85ddc24f-5591-4300-9269-cbc659dc7b4f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.392563 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgg79\" (UniqueName: \"kubernetes.io/projected/85ddc24f-5591-4300-9269-cbc659dc7b4f-kube-api-access-lgg79\") pod \"85ddc24f-5591-4300-9269-cbc659dc7b4f\" (UID: \"85ddc24f-5591-4300-9269-cbc659dc7b4f\") " Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.392593 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85ddc24f-5591-4300-9269-cbc659dc7b4f-serving-cert\") pod \"85ddc24f-5591-4300-9269-cbc659dc7b4f\" (UID: \"85ddc24f-5591-4300-9269-cbc659dc7b4f\") " Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.392628 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85ddc24f-5591-4300-9269-cbc659dc7b4f-client-ca\") pod \"85ddc24f-5591-4300-9269-cbc659dc7b4f\" (UID: \"85ddc24f-5591-4300-9269-cbc659dc7b4f\") " Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.392690 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8c7rm\" (UniqueName: \"kubernetes.io/projected/9f61a6cf-7081-41ed-9e89-05212a634fb0-kube-api-access-8c7rm\") pod \"9f61a6cf-7081-41ed-9e89-05212a634fb0\" (UID: \"9f61a6cf-7081-41ed-9e89-05212a634fb0\") " Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.392850 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ee9ac25b-a424-4b76-9acc-bdbacfb0a96b-proxy-ca-bundles\") pod \"controller-manager-6cc894f6b5-z2gqx\" (UID: \"ee9ac25b-a424-4b76-9acc-bdbacfb0a96b\") " pod="openshift-controller-manager/controller-manager-6cc894f6b5-z2gqx" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.392911 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee9ac25b-a424-4b76-9acc-bdbacfb0a96b-serving-cert\") pod \"controller-manager-6cc894f6b5-z2gqx\" (UID: \"ee9ac25b-a424-4b76-9acc-bdbacfb0a96b\") " pod="openshift-controller-manager/controller-manager-6cc894f6b5-z2gqx" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.392953 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ee9ac25b-a424-4b76-9acc-bdbacfb0a96b-tmp\") pod \"controller-manager-6cc894f6b5-z2gqx\" (UID: \"ee9ac25b-a424-4b76-9acc-bdbacfb0a96b\") " pod="openshift-controller-manager/controller-manager-6cc894f6b5-z2gqx" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.393003 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee9ac25b-a424-4b76-9acc-bdbacfb0a96b-client-ca\") pod \"controller-manager-6cc894f6b5-z2gqx\" (UID: \"ee9ac25b-a424-4b76-9acc-bdbacfb0a96b\") " pod="openshift-controller-manager/controller-manager-6cc894f6b5-z2gqx" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.393061 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee9ac25b-a424-4b76-9acc-bdbacfb0a96b-config\") pod \"controller-manager-6cc894f6b5-z2gqx\" (UID: 
\"ee9ac25b-a424-4b76-9acc-bdbacfb0a96b\") " pod="openshift-controller-manager/controller-manager-6cc894f6b5-z2gqx" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.393142 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9wnd\" (UniqueName: \"kubernetes.io/projected/ee9ac25b-a424-4b76-9acc-bdbacfb0a96b-kube-api-access-q9wnd\") pod \"controller-manager-6cc894f6b5-z2gqx\" (UID: \"ee9ac25b-a424-4b76-9acc-bdbacfb0a96b\") " pod="openshift-controller-manager/controller-manager-6cc894f6b5-z2gqx" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.393964 5099 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9f61a6cf-7081-41ed-9e89-05212a634fb0-tmp\") on node \"crc\" DevicePath \"\"" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.393994 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85ddc24f-5591-4300-9269-cbc659dc7b4f-config\") on node \"crc\" DevicePath \"\"" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.394009 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f61a6cf-7081-41ed-9e89-05212a634fb0-config\") on node \"crc\" DevicePath \"\"" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.394023 5099 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9f61a6cf-7081-41ed-9e89-05212a634fb0-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.394036 5099 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/85ddc24f-5591-4300-9269-cbc659dc7b4f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.394050 5099 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/85ddc24f-5591-4300-9269-cbc659dc7b4f-tmp\") on node \"crc\" DevicePath \"\"" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.393880 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85ddc24f-5591-4300-9269-cbc659dc7b4f-client-ca" (OuterVolumeSpecName: "client-ca") pod "85ddc24f-5591-4300-9269-cbc659dc7b4f" (UID: "85ddc24f-5591-4300-9269-cbc659dc7b4f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.400879 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85ddc24f-5591-4300-9269-cbc659dc7b4f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "85ddc24f-5591-4300-9269-cbc659dc7b4f" (UID: "85ddc24f-5591-4300-9269-cbc659dc7b4f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.400982 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f61a6cf-7081-41ed-9e89-05212a634fb0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9f61a6cf-7081-41ed-9e89-05212a634fb0" (UID: "9f61a6cf-7081-41ed-9e89-05212a634fb0"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.401002 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85ddc24f-5591-4300-9269-cbc659dc7b4f-kube-api-access-lgg79" (OuterVolumeSpecName: "kube-api-access-lgg79") pod "85ddc24f-5591-4300-9269-cbc659dc7b4f" (UID: "85ddc24f-5591-4300-9269-cbc659dc7b4f"). InnerVolumeSpecName "kube-api-access-lgg79". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.401052 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f61a6cf-7081-41ed-9e89-05212a634fb0-kube-api-access-8c7rm" (OuterVolumeSpecName: "kube-api-access-8c7rm") pod "9f61a6cf-7081-41ed-9e89-05212a634fb0" (UID: "9f61a6cf-7081-41ed-9e89-05212a634fb0"). InnerVolumeSpecName "kube-api-access-8c7rm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.510362 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee9ac25b-a424-4b76-9acc-bdbacfb0a96b-serving-cert\") pod \"controller-manager-6cc894f6b5-z2gqx\" (UID: \"ee9ac25b-a424-4b76-9acc-bdbacfb0a96b\") " pod="openshift-controller-manager/controller-manager-6cc894f6b5-z2gqx" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.511630 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ee9ac25b-a424-4b76-9acc-bdbacfb0a96b-tmp\") pod \"controller-manager-6cc894f6b5-z2gqx\" (UID: \"ee9ac25b-a424-4b76-9acc-bdbacfb0a96b\") " pod="openshift-controller-manager/controller-manager-6cc894f6b5-z2gqx" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.511705 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/39a7d04e-abc7-4b9c-9c51-fce2b2c699b7-client-ca\") pod \"route-controller-manager-5558b976b5-w6wqd\" (UID: \"39a7d04e-abc7-4b9c-9c51-fce2b2c699b7\") " pod="openshift-route-controller-manager/route-controller-manager-5558b976b5-w6wqd" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.511791 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8lwb\" (UniqueName: \"kubernetes.io/projected/39a7d04e-abc7-4b9c-9c51-fce2b2c699b7-kube-api-access-r8lwb\") pod \"route-controller-manager-5558b976b5-w6wqd\" (UID: \"39a7d04e-abc7-4b9c-9c51-fce2b2c699b7\") " pod="openshift-route-controller-manager/route-controller-manager-5558b976b5-w6wqd" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.511865 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee9ac25b-a424-4b76-9acc-bdbacfb0a96b-client-ca\") pod \"controller-manager-6cc894f6b5-z2gqx\" (UID: \"ee9ac25b-a424-4b76-9acc-bdbacfb0a96b\") " pod="openshift-controller-manager/controller-manager-6cc894f6b5-z2gqx" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.511933 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/39a7d04e-abc7-4b9c-9c51-fce2b2c699b7-tmp\") pod \"route-controller-manager-5558b976b5-w6wqd\" (UID: \"39a7d04e-abc7-4b9c-9c51-fce2b2c699b7\") " 
pod="openshift-route-controller-manager/route-controller-manager-5558b976b5-w6wqd" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.512007 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee9ac25b-a424-4b76-9acc-bdbacfb0a96b-config\") pod \"controller-manager-6cc894f6b5-z2gqx\" (UID: \"ee9ac25b-a424-4b76-9acc-bdbacfb0a96b\") " pod="openshift-controller-manager/controller-manager-6cc894f6b5-z2gqx" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.512049 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39a7d04e-abc7-4b9c-9c51-fce2b2c699b7-config\") pod \"route-controller-manager-5558b976b5-w6wqd\" (UID: \"39a7d04e-abc7-4b9c-9c51-fce2b2c699b7\") " pod="openshift-route-controller-manager/route-controller-manager-5558b976b5-w6wqd" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.512368 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q9wnd\" (UniqueName: \"kubernetes.io/projected/ee9ac25b-a424-4b76-9acc-bdbacfb0a96b-kube-api-access-q9wnd\") pod \"controller-manager-6cc894f6b5-z2gqx\" (UID: \"ee9ac25b-a424-4b76-9acc-bdbacfb0a96b\") " pod="openshift-controller-manager/controller-manager-6cc894f6b5-z2gqx" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.512514 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ee9ac25b-a424-4b76-9acc-bdbacfb0a96b-proxy-ca-bundles\") pod \"controller-manager-6cc894f6b5-z2gqx\" (UID: \"ee9ac25b-a424-4b76-9acc-bdbacfb0a96b\") " pod="openshift-controller-manager/controller-manager-6cc894f6b5-z2gqx" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.512621 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39a7d04e-abc7-4b9c-9c51-fce2b2c699b7-serving-cert\") pod \"route-controller-manager-5558b976b5-w6wqd\" (UID: \"39a7d04e-abc7-4b9c-9c51-fce2b2c699b7\") " pod="openshift-route-controller-manager/route-controller-manager-5558b976b5-w6wqd" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.512725 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f61a6cf-7081-41ed-9e89-05212a634fb0-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.512763 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lgg79\" (UniqueName: \"kubernetes.io/projected/85ddc24f-5591-4300-9269-cbc659dc7b4f-kube-api-access-lgg79\") on node \"crc\" DevicePath \"\"" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.512778 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85ddc24f-5591-4300-9269-cbc659dc7b4f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.512792 5099 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85ddc24f-5591-4300-9269-cbc659dc7b4f-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.512811 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8c7rm\" (UniqueName: 
\"kubernetes.io/projected/9f61a6cf-7081-41ed-9e89-05212a634fb0-kube-api-access-8c7rm\") on node \"crc\" DevicePath \"\"" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.513408 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ee9ac25b-a424-4b76-9acc-bdbacfb0a96b-tmp\") pod \"controller-manager-6cc894f6b5-z2gqx\" (UID: \"ee9ac25b-a424-4b76-9acc-bdbacfb0a96b\") " pod="openshift-controller-manager/controller-manager-6cc894f6b5-z2gqx" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.517054 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ee9ac25b-a424-4b76-9acc-bdbacfb0a96b-proxy-ca-bundles\") pod \"controller-manager-6cc894f6b5-z2gqx\" (UID: \"ee9ac25b-a424-4b76-9acc-bdbacfb0a96b\") " pod="openshift-controller-manager/controller-manager-6cc894f6b5-z2gqx" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.517078 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee9ac25b-a424-4b76-9acc-bdbacfb0a96b-config\") pod \"controller-manager-6cc894f6b5-z2gqx\" (UID: \"ee9ac25b-a424-4b76-9acc-bdbacfb0a96b\") " pod="openshift-controller-manager/controller-manager-6cc894f6b5-z2gqx" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.519109 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee9ac25b-a424-4b76-9acc-bdbacfb0a96b-serving-cert\") pod \"controller-manager-6cc894f6b5-z2gqx\" (UID: \"ee9ac25b-a424-4b76-9acc-bdbacfb0a96b\") " pod="openshift-controller-manager/controller-manager-6cc894f6b5-z2gqx" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.527945 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee9ac25b-a424-4b76-9acc-bdbacfb0a96b-client-ca\") pod \"controller-manager-6cc894f6b5-z2gqx\" (UID: \"ee9ac25b-a424-4b76-9acc-bdbacfb0a96b\") " pod="openshift-controller-manager/controller-manager-6cc894f6b5-z2gqx" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.535603 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9wnd\" (UniqueName: \"kubernetes.io/projected/ee9ac25b-a424-4b76-9acc-bdbacfb0a96b-kube-api-access-q9wnd\") pod \"controller-manager-6cc894f6b5-z2gqx\" (UID: \"ee9ac25b-a424-4b76-9acc-bdbacfb0a96b\") " pod="openshift-controller-manager/controller-manager-6cc894f6b5-z2gqx" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.615008 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/39a7d04e-abc7-4b9c-9c51-fce2b2c699b7-client-ca\") pod \"route-controller-manager-5558b976b5-w6wqd\" (UID: \"39a7d04e-abc7-4b9c-9c51-fce2b2c699b7\") " pod="openshift-route-controller-manager/route-controller-manager-5558b976b5-w6wqd" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.615086 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r8lwb\" (UniqueName: \"kubernetes.io/projected/39a7d04e-abc7-4b9c-9c51-fce2b2c699b7-kube-api-access-r8lwb\") pod \"route-controller-manager-5558b976b5-w6wqd\" (UID: \"39a7d04e-abc7-4b9c-9c51-fce2b2c699b7\") " pod="openshift-route-controller-manager/route-controller-manager-5558b976b5-w6wqd" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.615145 5099 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/39a7d04e-abc7-4b9c-9c51-fce2b2c699b7-tmp\") pod \"route-controller-manager-5558b976b5-w6wqd\" (UID: \"39a7d04e-abc7-4b9c-9c51-fce2b2c699b7\") " pod="openshift-route-controller-manager/route-controller-manager-5558b976b5-w6wqd" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.615182 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39a7d04e-abc7-4b9c-9c51-fce2b2c699b7-config\") pod \"route-controller-manager-5558b976b5-w6wqd\" (UID: \"39a7d04e-abc7-4b9c-9c51-fce2b2c699b7\") " pod="openshift-route-controller-manager/route-controller-manager-5558b976b5-w6wqd" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.615249 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39a7d04e-abc7-4b9c-9c51-fce2b2c699b7-serving-cert\") pod \"route-controller-manager-5558b976b5-w6wqd\" (UID: \"39a7d04e-abc7-4b9c-9c51-fce2b2c699b7\") " pod="openshift-route-controller-manager/route-controller-manager-5558b976b5-w6wqd" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.615994 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/39a7d04e-abc7-4b9c-9c51-fce2b2c699b7-tmp\") pod \"route-controller-manager-5558b976b5-w6wqd\" (UID: \"39a7d04e-abc7-4b9c-9c51-fce2b2c699b7\") " pod="openshift-route-controller-manager/route-controller-manager-5558b976b5-w6wqd" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.616236 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/39a7d04e-abc7-4b9c-9c51-fce2b2c699b7-client-ca\") pod \"route-controller-manager-5558b976b5-w6wqd\" (UID: \"39a7d04e-abc7-4b9c-9c51-fce2b2c699b7\") " pod="openshift-route-controller-manager/route-controller-manager-5558b976b5-w6wqd" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.616423 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39a7d04e-abc7-4b9c-9c51-fce2b2c699b7-config\") pod \"route-controller-manager-5558b976b5-w6wqd\" (UID: \"39a7d04e-abc7-4b9c-9c51-fce2b2c699b7\") " pod="openshift-route-controller-manager/route-controller-manager-5558b976b5-w6wqd" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.620456 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39a7d04e-abc7-4b9c-9c51-fce2b2c699b7-serving-cert\") pod \"route-controller-manager-5558b976b5-w6wqd\" (UID: \"39a7d04e-abc7-4b9c-9c51-fce2b2c699b7\") " pod="openshift-route-controller-manager/route-controller-manager-5558b976b5-w6wqd" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.630934 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6cc894f6b5-z2gqx" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.637053 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8lwb\" (UniqueName: \"kubernetes.io/projected/39a7d04e-abc7-4b9c-9c51-fce2b2c699b7-kube-api-access-r8lwb\") pod \"route-controller-manager-5558b976b5-w6wqd\" (UID: \"39a7d04e-abc7-4b9c-9c51-fce2b2c699b7\") " pod="openshift-route-controller-manager/route-controller-manager-5558b976b5-w6wqd" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.671967 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5558b976b5-w6wqd" Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.906078 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6cc894f6b5-z2gqx"] Jan 21 18:18:51 crc kubenswrapper[5099]: I0121 18:18:51.954987 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5558b976b5-w6wqd"] Jan 21 18:18:51 crc kubenswrapper[5099]: W0121 18:18:51.969510 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39a7d04e_abc7_4b9c_9c51_fce2b2c699b7.slice/crio-bfe7a9869031d5afc3a897de574558439f21a92fec88d08ce3013247c2c7760a WatchSource:0}: Error finding container bfe7a9869031d5afc3a897de574558439f21a92fec88d08ce3013247c2c7760a: Status 404 returned error can't find the container with id bfe7a9869031d5afc3a897de574558439f21a92fec88d08ce3013247c2c7760a Jan 21 18:18:52 crc kubenswrapper[5099]: I0121 18:18:52.067962 5099 patch_prober.go:28] interesting pod/machine-config-daemon-hsl47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 18:18:52 crc kubenswrapper[5099]: I0121 18:18:52.068752 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 18:18:52 crc kubenswrapper[5099]: I0121 18:18:52.068898 5099 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" Jan 21 18:18:52 crc kubenswrapper[5099]: I0121 18:18:52.069808 5099 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"554ace079195fe7f2ecf4de1b40c0c4549e4632325ec6988cab9cec5c62f4f7b"} pod="openshift-machine-config-operator/machine-config-daemon-hsl47" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 18:18:52 crc kubenswrapper[5099]: I0121 18:18:52.069958 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" containerID="cri-o://554ace079195fe7f2ecf4de1b40c0c4549e4632325ec6988cab9cec5c62f4f7b" gracePeriod=600 Jan 21 18:18:52 crc kubenswrapper[5099]: I0121 18:18:52.103874 5099 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5558b976b5-w6wqd" event={"ID":"39a7d04e-abc7-4b9c-9c51-fce2b2c699b7","Type":"ContainerStarted","Data":"bfe7a9869031d5afc3a897de574558439f21a92fec88d08ce3013247c2c7760a"} Jan 21 18:18:52 crc kubenswrapper[5099]: I0121 18:18:52.105249 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cc894f6b5-z2gqx" event={"ID":"ee9ac25b-a424-4b76-9acc-bdbacfb0a96b","Type":"ContainerStarted","Data":"dd5a51be1341f553106f8a34e5e9f90bdbb7c7c6d73e691bfa512708d08382ea"} Jan 21 18:18:52 crc kubenswrapper[5099]: I0121 18:18:52.105286 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cc894f6b5-z2gqx" event={"ID":"ee9ac25b-a424-4b76-9acc-bdbacfb0a96b","Type":"ContainerStarted","Data":"6c56d24925b9585b3b3884e4c0b4c95b17e5e754ad89c4ebbd283102577079cf"} Jan 21 18:18:52 crc kubenswrapper[5099]: I0121 18:18:52.106517 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-6cc894f6b5-z2gqx" Jan 21 18:18:52 crc kubenswrapper[5099]: I0121 18:18:52.108256 5099 patch_prober.go:28] interesting pod/controller-manager-6cc894f6b5-z2gqx container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Jan 21 18:18:52 crc kubenswrapper[5099]: I0121 18:18:52.109031 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6cc894f6b5-z2gqx" podUID="ee9ac25b-a424-4b76-9acc-bdbacfb0a96b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" Jan 21 18:18:52 crc kubenswrapper[5099]: I0121 18:18:52.109170 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-5p85q" event={"ID":"9f61a6cf-7081-41ed-9e89-05212a634fb0","Type":"ContainerDied","Data":"b2847d06fec6892172f8002db70a8b13c1d50df59c8382262e8c67bd6faceb79"} Jan 21 18:18:52 crc kubenswrapper[5099]: I0121 18:18:52.109209 5099 scope.go:117] "RemoveContainer" containerID="b91f85dd2b12063e4eebbc8521ea0027ab3759849983328aba120ab372a1e03e" Jan 21 18:18:52 crc kubenswrapper[5099]: I0121 18:18:52.109417 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-5p85q" Jan 21 18:18:52 crc kubenswrapper[5099]: I0121 18:18:52.113884 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-5pwm7" event={"ID":"85ddc24f-5591-4300-9269-cbc659dc7b4f","Type":"ContainerDied","Data":"b0e22df1402c9c6886021823db05b674c1cdfd8195a2b331f18dacd5c80ee76f"} Jan 21 18:18:52 crc kubenswrapper[5099]: I0121 18:18:52.113917 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-5pwm7" Jan 21 18:18:52 crc kubenswrapper[5099]: I0121 18:18:52.133213 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6cc894f6b5-z2gqx" podStartSLOduration=2.133187738 podStartE2EDuration="2.133187738s" podCreationTimestamp="2026-01-21 18:18:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:18:52.131491924 +0000 UTC m=+289.545454385" watchObservedRunningTime="2026-01-21 18:18:52.133187738 +0000 UTC m=+289.547150199" Jan 21 18:18:52 crc kubenswrapper[5099]: I0121 18:18:52.146832 5099 scope.go:117] "RemoveContainer" containerID="3aee521a344ef0d410860d95f89e5e08d1609ba13c6f9cb6a92e0275b7e865b6" Jan 21 18:18:52 crc kubenswrapper[5099]: I0121 18:18:52.161247 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-5pwm7"] Jan 21 18:18:52 crc kubenswrapper[5099]: I0121 18:18:52.167861 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-5pwm7"] Jan 21 18:18:52 crc kubenswrapper[5099]: I0121 18:18:52.182927 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-5p85q"] Jan 21 18:18:52 crc kubenswrapper[5099]: I0121 18:18:52.185208 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-5p85q"] Jan 21 18:18:53 crc kubenswrapper[5099]: I0121 18:18:53.123896 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5558b976b5-w6wqd" event={"ID":"39a7d04e-abc7-4b9c-9c51-fce2b2c699b7","Type":"ContainerStarted","Data":"3b57b1abb99b748be06d4d438a11c05d645c21595b02cac76a427d549d875fa8"} Jan 21 18:18:53 crc kubenswrapper[5099]: I0121 18:18:53.124823 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-5558b976b5-w6wqd" Jan 21 18:18:53 crc kubenswrapper[5099]: I0121 18:18:53.130420 5099 generic.go:358] "Generic (PLEG): container finished" podID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerID="554ace079195fe7f2ecf4de1b40c0c4549e4632325ec6988cab9cec5c62f4f7b" exitCode=0 Jan 21 18:18:53 crc kubenswrapper[5099]: I0121 18:18:53.130776 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" event={"ID":"b19b831f-eaf0-4c77-859b-84eb9a5f233c","Type":"ContainerDied","Data":"554ace079195fe7f2ecf4de1b40c0c4549e4632325ec6988cab9cec5c62f4f7b"} Jan 21 18:18:53 crc kubenswrapper[5099]: I0121 18:18:53.130945 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5558b976b5-w6wqd" Jan 21 18:18:53 crc kubenswrapper[5099]: I0121 18:18:53.131049 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" event={"ID":"b19b831f-eaf0-4c77-859b-84eb9a5f233c","Type":"ContainerStarted","Data":"d9f2116d616e1adef348402f9545fe2386c1505cb1d54b97796467b74fd56b6b"} Jan 21 18:18:53 crc kubenswrapper[5099]: I0121 18:18:53.137286 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-6cc894f6b5-z2gqx" Jan 21 18:18:53 crc kubenswrapper[5099]: I0121 18:18:53.143264 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5558b976b5-w6wqd" podStartSLOduration=2.143247873 podStartE2EDuration="2.143247873s" podCreationTimestamp="2026-01-21 18:18:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:18:53.142491644 +0000 UTC m=+290.556454115" watchObservedRunningTime="2026-01-21 18:18:53.143247873 +0000 UTC m=+290.557210334" Jan 21 18:18:53 crc kubenswrapper[5099]: I0121 18:18:53.339271 5099 ???:1] "http: TLS handshake error from 192.168.126.11:35378: no serving certificate available for the kubelet" Jan 21 18:18:53 crc kubenswrapper[5099]: I0121 18:18:53.929065 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85ddc24f-5591-4300-9269-cbc659dc7b4f" path="/var/lib/kubelet/pods/85ddc24f-5591-4300-9269-cbc659dc7b4f/volumes" Jan 21 18:18:53 crc kubenswrapper[5099]: I0121 18:18:53.929954 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f61a6cf-7081-41ed-9e89-05212a634fb0" path="/var/lib/kubelet/pods/9f61a6cf-7081-41ed-9e89-05212a634fb0/volumes" Jan 21 18:19:01 crc kubenswrapper[5099]: I0121 18:19:01.263718 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6cc894f6b5-z2gqx"] Jan 21 18:19:01 crc kubenswrapper[5099]: I0121 18:19:01.266654 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6cc894f6b5-z2gqx" podUID="ee9ac25b-a424-4b76-9acc-bdbacfb0a96b" containerName="controller-manager" containerID="cri-o://dd5a51be1341f553106f8a34e5e9f90bdbb7c7c6d73e691bfa512708d08382ea" gracePeriod=30 Jan 21 18:19:01 crc kubenswrapper[5099]: I0121 18:19:01.292224 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5558b976b5-w6wqd"] Jan 21 18:19:01 crc kubenswrapper[5099]: I0121 18:19:01.292906 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5558b976b5-w6wqd" podUID="39a7d04e-abc7-4b9c-9c51-fce2b2c699b7" containerName="route-controller-manager" containerID="cri-o://3b57b1abb99b748be06d4d438a11c05d645c21595b02cac76a427d549d875fa8" gracePeriod=30 Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.188097 5099 generic.go:358] "Generic (PLEG): container finished" podID="39a7d04e-abc7-4b9c-9c51-fce2b2c699b7" containerID="3b57b1abb99b748be06d4d438a11c05d645c21595b02cac76a427d549d875fa8" exitCode=0 Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.188214 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5558b976b5-w6wqd" event={"ID":"39a7d04e-abc7-4b9c-9c51-fce2b2c699b7","Type":"ContainerDied","Data":"3b57b1abb99b748be06d4d438a11c05d645c21595b02cac76a427d549d875fa8"} Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.191503 5099 generic.go:358] "Generic (PLEG): container finished" podID="ee9ac25b-a424-4b76-9acc-bdbacfb0a96b" containerID="dd5a51be1341f553106f8a34e5e9f90bdbb7c7c6d73e691bfa512708d08382ea" exitCode=0 Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.191643 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-6cc894f6b5-z2gqx" event={"ID":"ee9ac25b-a424-4b76-9acc-bdbacfb0a96b","Type":"ContainerDied","Data":"dd5a51be1341f553106f8a34e5e9f90bdbb7c7c6d73e691bfa512708d08382ea"} Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.341403 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5558b976b5-w6wqd" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.380630 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64bc54d77c-hcs4c"] Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.381233 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="39a7d04e-abc7-4b9c-9c51-fce2b2c699b7" containerName="route-controller-manager" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.381255 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="39a7d04e-abc7-4b9c-9c51-fce2b2c699b7" containerName="route-controller-manager" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.381360 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="39a7d04e-abc7-4b9c-9c51-fce2b2c699b7" containerName="route-controller-manager" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.386474 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64bc54d77c-hcs4c" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.411420 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64bc54d77c-hcs4c"] Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.476409 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39a7d04e-abc7-4b9c-9c51-fce2b2c699b7-serving-cert\") pod \"39a7d04e-abc7-4b9c-9c51-fce2b2c699b7\" (UID: \"39a7d04e-abc7-4b9c-9c51-fce2b2c699b7\") " Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.476571 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8lwb\" (UniqueName: \"kubernetes.io/projected/39a7d04e-abc7-4b9c-9c51-fce2b2c699b7-kube-api-access-r8lwb\") pod \"39a7d04e-abc7-4b9c-9c51-fce2b2c699b7\" (UID: \"39a7d04e-abc7-4b9c-9c51-fce2b2c699b7\") " Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.476618 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/39a7d04e-abc7-4b9c-9c51-fce2b2c699b7-tmp\") pod \"39a7d04e-abc7-4b9c-9c51-fce2b2c699b7\" (UID: \"39a7d04e-abc7-4b9c-9c51-fce2b2c699b7\") " Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.476648 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/39a7d04e-abc7-4b9c-9c51-fce2b2c699b7-client-ca\") pod \"39a7d04e-abc7-4b9c-9c51-fce2b2c699b7\" (UID: \"39a7d04e-abc7-4b9c-9c51-fce2b2c699b7\") " Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.476667 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39a7d04e-abc7-4b9c-9c51-fce2b2c699b7-config\") pod \"39a7d04e-abc7-4b9c-9c51-fce2b2c699b7\" (UID: \"39a7d04e-abc7-4b9c-9c51-fce2b2c699b7\") " Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.476857 5099 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsztf\" (UniqueName: \"kubernetes.io/projected/a1628b9b-58d9-4368-bfcd-88a29a79b9d4-kube-api-access-lsztf\") pod \"route-controller-manager-64bc54d77c-hcs4c\" (UID: \"a1628b9b-58d9-4368-bfcd-88a29a79b9d4\") " pod="openshift-route-controller-manager/route-controller-manager-64bc54d77c-hcs4c" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.477096 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1628b9b-58d9-4368-bfcd-88a29a79b9d4-serving-cert\") pod \"route-controller-manager-64bc54d77c-hcs4c\" (UID: \"a1628b9b-58d9-4368-bfcd-88a29a79b9d4\") " pod="openshift-route-controller-manager/route-controller-manager-64bc54d77c-hcs4c" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.477216 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39a7d04e-abc7-4b9c-9c51-fce2b2c699b7-tmp" (OuterVolumeSpecName: "tmp") pod "39a7d04e-abc7-4b9c-9c51-fce2b2c699b7" (UID: "39a7d04e-abc7-4b9c-9c51-fce2b2c699b7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.477377 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a1628b9b-58d9-4368-bfcd-88a29a79b9d4-client-ca\") pod \"route-controller-manager-64bc54d77c-hcs4c\" (UID: \"a1628b9b-58d9-4368-bfcd-88a29a79b9d4\") " pod="openshift-route-controller-manager/route-controller-manager-64bc54d77c-hcs4c" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.477589 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a1628b9b-58d9-4368-bfcd-88a29a79b9d4-tmp\") pod \"route-controller-manager-64bc54d77c-hcs4c\" (UID: \"a1628b9b-58d9-4368-bfcd-88a29a79b9d4\") " pod="openshift-route-controller-manager/route-controller-manager-64bc54d77c-hcs4c" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.477617 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39a7d04e-abc7-4b9c-9c51-fce2b2c699b7-config" (OuterVolumeSpecName: "config") pod "39a7d04e-abc7-4b9c-9c51-fce2b2c699b7" (UID: "39a7d04e-abc7-4b9c-9c51-fce2b2c699b7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.477784 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1628b9b-58d9-4368-bfcd-88a29a79b9d4-config\") pod \"route-controller-manager-64bc54d77c-hcs4c\" (UID: \"a1628b9b-58d9-4368-bfcd-88a29a79b9d4\") " pod="openshift-route-controller-manager/route-controller-manager-64bc54d77c-hcs4c" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.477840 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39a7d04e-abc7-4b9c-9c51-fce2b2c699b7-client-ca" (OuterVolumeSpecName: "client-ca") pod "39a7d04e-abc7-4b9c-9c51-fce2b2c699b7" (UID: "39a7d04e-abc7-4b9c-9c51-fce2b2c699b7"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.477895 5099 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/39a7d04e-abc7-4b9c-9c51-fce2b2c699b7-tmp\") on node \"crc\" DevicePath \"\"" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.477982 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39a7d04e-abc7-4b9c-9c51-fce2b2c699b7-config\") on node \"crc\" DevicePath \"\"" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.489092 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39a7d04e-abc7-4b9c-9c51-fce2b2c699b7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "39a7d04e-abc7-4b9c-9c51-fce2b2c699b7" (UID: "39a7d04e-abc7-4b9c-9c51-fce2b2c699b7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.489164 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39a7d04e-abc7-4b9c-9c51-fce2b2c699b7-kube-api-access-r8lwb" (OuterVolumeSpecName: "kube-api-access-r8lwb") pod "39a7d04e-abc7-4b9c-9c51-fce2b2c699b7" (UID: "39a7d04e-abc7-4b9c-9c51-fce2b2c699b7"). InnerVolumeSpecName "kube-api-access-r8lwb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.508706 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cc894f6b5-z2gqx" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.548556 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-649b5d48d7-cdcrk"] Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.549215 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ee9ac25b-a424-4b76-9acc-bdbacfb0a96b" containerName="controller-manager" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.549238 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee9ac25b-a424-4b76-9acc-bdbacfb0a96b" containerName="controller-manager" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.549397 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="ee9ac25b-a424-4b76-9acc-bdbacfb0a96b" containerName="controller-manager" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.553938 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-649b5d48d7-cdcrk" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.562947 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-649b5d48d7-cdcrk"] Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.579001 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a1628b9b-58d9-4368-bfcd-88a29a79b9d4-tmp\") pod \"route-controller-manager-64bc54d77c-hcs4c\" (UID: \"a1628b9b-58d9-4368-bfcd-88a29a79b9d4\") " pod="openshift-route-controller-manager/route-controller-manager-64bc54d77c-hcs4c" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.579088 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1628b9b-58d9-4368-bfcd-88a29a79b9d4-config\") pod \"route-controller-manager-64bc54d77c-hcs4c\" (UID: \"a1628b9b-58d9-4368-bfcd-88a29a79b9d4\") " pod="openshift-route-controller-manager/route-controller-manager-64bc54d77c-hcs4c" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.579134 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lsztf\" (UniqueName: \"kubernetes.io/projected/a1628b9b-58d9-4368-bfcd-88a29a79b9d4-kube-api-access-lsztf\") pod \"route-controller-manager-64bc54d77c-hcs4c\" (UID: \"a1628b9b-58d9-4368-bfcd-88a29a79b9d4\") " pod="openshift-route-controller-manager/route-controller-manager-64bc54d77c-hcs4c" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.579167 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1628b9b-58d9-4368-bfcd-88a29a79b9d4-serving-cert\") pod \"route-controller-manager-64bc54d77c-hcs4c\" (UID: \"a1628b9b-58d9-4368-bfcd-88a29a79b9d4\") " pod="openshift-route-controller-manager/route-controller-manager-64bc54d77c-hcs4c" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.579426 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a1628b9b-58d9-4368-bfcd-88a29a79b9d4-client-ca\") pod \"route-controller-manager-64bc54d77c-hcs4c\" (UID: \"a1628b9b-58d9-4368-bfcd-88a29a79b9d4\") " pod="openshift-route-controller-manager/route-controller-manager-64bc54d77c-hcs4c" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.579667 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39a7d04e-abc7-4b9c-9c51-fce2b2c699b7-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.579697 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r8lwb\" (UniqueName: \"kubernetes.io/projected/39a7d04e-abc7-4b9c-9c51-fce2b2c699b7-kube-api-access-r8lwb\") on node \"crc\" DevicePath \"\"" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.579714 5099 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/39a7d04e-abc7-4b9c-9c51-fce2b2c699b7-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.579749 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a1628b9b-58d9-4368-bfcd-88a29a79b9d4-tmp\") pod \"route-controller-manager-64bc54d77c-hcs4c\" (UID: 
\"a1628b9b-58d9-4368-bfcd-88a29a79b9d4\") " pod="openshift-route-controller-manager/route-controller-manager-64bc54d77c-hcs4c" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.580567 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a1628b9b-58d9-4368-bfcd-88a29a79b9d4-client-ca\") pod \"route-controller-manager-64bc54d77c-hcs4c\" (UID: \"a1628b9b-58d9-4368-bfcd-88a29a79b9d4\") " pod="openshift-route-controller-manager/route-controller-manager-64bc54d77c-hcs4c" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.580690 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1628b9b-58d9-4368-bfcd-88a29a79b9d4-config\") pod \"route-controller-manager-64bc54d77c-hcs4c\" (UID: \"a1628b9b-58d9-4368-bfcd-88a29a79b9d4\") " pod="openshift-route-controller-manager/route-controller-manager-64bc54d77c-hcs4c" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.590087 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1628b9b-58d9-4368-bfcd-88a29a79b9d4-serving-cert\") pod \"route-controller-manager-64bc54d77c-hcs4c\" (UID: \"a1628b9b-58d9-4368-bfcd-88a29a79b9d4\") " pod="openshift-route-controller-manager/route-controller-manager-64bc54d77c-hcs4c" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.598836 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsztf\" (UniqueName: \"kubernetes.io/projected/a1628b9b-58d9-4368-bfcd-88a29a79b9d4-kube-api-access-lsztf\") pod \"route-controller-manager-64bc54d77c-hcs4c\" (UID: \"a1628b9b-58d9-4368-bfcd-88a29a79b9d4\") " pod="openshift-route-controller-manager/route-controller-manager-64bc54d77c-hcs4c" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.680114 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ee9ac25b-a424-4b76-9acc-bdbacfb0a96b-tmp\") pod \"ee9ac25b-a424-4b76-9acc-bdbacfb0a96b\" (UID: \"ee9ac25b-a424-4b76-9acc-bdbacfb0a96b\") " Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.680156 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee9ac25b-a424-4b76-9acc-bdbacfb0a96b-client-ca\") pod \"ee9ac25b-a424-4b76-9acc-bdbacfb0a96b\" (UID: \"ee9ac25b-a424-4b76-9acc-bdbacfb0a96b\") " Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.680261 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ee9ac25b-a424-4b76-9acc-bdbacfb0a96b-proxy-ca-bundles\") pod \"ee9ac25b-a424-4b76-9acc-bdbacfb0a96b\" (UID: \"ee9ac25b-a424-4b76-9acc-bdbacfb0a96b\") " Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.680321 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee9ac25b-a424-4b76-9acc-bdbacfb0a96b-serving-cert\") pod \"ee9ac25b-a424-4b76-9acc-bdbacfb0a96b\" (UID: \"ee9ac25b-a424-4b76-9acc-bdbacfb0a96b\") " Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.680341 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9wnd\" (UniqueName: \"kubernetes.io/projected/ee9ac25b-a424-4b76-9acc-bdbacfb0a96b-kube-api-access-q9wnd\") pod \"ee9ac25b-a424-4b76-9acc-bdbacfb0a96b\" (UID: 
\"ee9ac25b-a424-4b76-9acc-bdbacfb0a96b\") " Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.680506 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee9ac25b-a424-4b76-9acc-bdbacfb0a96b-config\") pod \"ee9ac25b-a424-4b76-9acc-bdbacfb0a96b\" (UID: \"ee9ac25b-a424-4b76-9acc-bdbacfb0a96b\") " Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.680844 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/abedadc3-8c47-44fc-81f4-7dbc96610fa0-proxy-ca-bundles\") pod \"controller-manager-649b5d48d7-cdcrk\" (UID: \"abedadc3-8c47-44fc-81f4-7dbc96610fa0\") " pod="openshift-controller-manager/controller-manager-649b5d48d7-cdcrk" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.680896 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/abedadc3-8c47-44fc-81f4-7dbc96610fa0-tmp\") pod \"controller-manager-649b5d48d7-cdcrk\" (UID: \"abedadc3-8c47-44fc-81f4-7dbc96610fa0\") " pod="openshift-controller-manager/controller-manager-649b5d48d7-cdcrk" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.680909 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee9ac25b-a424-4b76-9acc-bdbacfb0a96b-tmp" (OuterVolumeSpecName: "tmp") pod "ee9ac25b-a424-4b76-9acc-bdbacfb0a96b" (UID: "ee9ac25b-a424-4b76-9acc-bdbacfb0a96b"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.681225 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abedadc3-8c47-44fc-81f4-7dbc96610fa0-config\") pod \"controller-manager-649b5d48d7-cdcrk\" (UID: \"abedadc3-8c47-44fc-81f4-7dbc96610fa0\") " pod="openshift-controller-manager/controller-manager-649b5d48d7-cdcrk" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.681413 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz6wz\" (UniqueName: \"kubernetes.io/projected/abedadc3-8c47-44fc-81f4-7dbc96610fa0-kube-api-access-rz6wz\") pod \"controller-manager-649b5d48d7-cdcrk\" (UID: \"abedadc3-8c47-44fc-81f4-7dbc96610fa0\") " pod="openshift-controller-manager/controller-manager-649b5d48d7-cdcrk" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.681496 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abedadc3-8c47-44fc-81f4-7dbc96610fa0-serving-cert\") pod \"controller-manager-649b5d48d7-cdcrk\" (UID: \"abedadc3-8c47-44fc-81f4-7dbc96610fa0\") " pod="openshift-controller-manager/controller-manager-649b5d48d7-cdcrk" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.681609 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/abedadc3-8c47-44fc-81f4-7dbc96610fa0-client-ca\") pod \"controller-manager-649b5d48d7-cdcrk\" (UID: \"abedadc3-8c47-44fc-81f4-7dbc96610fa0\") " pod="openshift-controller-manager/controller-manager-649b5d48d7-cdcrk" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.681638 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/ee9ac25b-a424-4b76-9acc-bdbacfb0a96b-client-ca" (OuterVolumeSpecName: "client-ca") pod "ee9ac25b-a424-4b76-9acc-bdbacfb0a96b" (UID: "ee9ac25b-a424-4b76-9acc-bdbacfb0a96b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.681721 5099 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ee9ac25b-a424-4b76-9acc-bdbacfb0a96b-tmp\") on node \"crc\" DevicePath \"\"" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.681872 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee9ac25b-a424-4b76-9acc-bdbacfb0a96b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ee9ac25b-a424-4b76-9acc-bdbacfb0a96b" (UID: "ee9ac25b-a424-4b76-9acc-bdbacfb0a96b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.681964 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee9ac25b-a424-4b76-9acc-bdbacfb0a96b-config" (OuterVolumeSpecName: "config") pod "ee9ac25b-a424-4b76-9acc-bdbacfb0a96b" (UID: "ee9ac25b-a424-4b76-9acc-bdbacfb0a96b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.685226 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee9ac25b-a424-4b76-9acc-bdbacfb0a96b-kube-api-access-q9wnd" (OuterVolumeSpecName: "kube-api-access-q9wnd") pod "ee9ac25b-a424-4b76-9acc-bdbacfb0a96b" (UID: "ee9ac25b-a424-4b76-9acc-bdbacfb0a96b"). InnerVolumeSpecName "kube-api-access-q9wnd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.686596 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee9ac25b-a424-4b76-9acc-bdbacfb0a96b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ee9ac25b-a424-4b76-9acc-bdbacfb0a96b" (UID: "ee9ac25b-a424-4b76-9acc-bdbacfb0a96b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.707889 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64bc54d77c-hcs4c" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.782969 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abedadc3-8c47-44fc-81f4-7dbc96610fa0-config\") pod \"controller-manager-649b5d48d7-cdcrk\" (UID: \"abedadc3-8c47-44fc-81f4-7dbc96610fa0\") " pod="openshift-controller-manager/controller-manager-649b5d48d7-cdcrk" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.783031 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rz6wz\" (UniqueName: \"kubernetes.io/projected/abedadc3-8c47-44fc-81f4-7dbc96610fa0-kube-api-access-rz6wz\") pod \"controller-manager-649b5d48d7-cdcrk\" (UID: \"abedadc3-8c47-44fc-81f4-7dbc96610fa0\") " pod="openshift-controller-manager/controller-manager-649b5d48d7-cdcrk" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.783059 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abedadc3-8c47-44fc-81f4-7dbc96610fa0-serving-cert\") pod \"controller-manager-649b5d48d7-cdcrk\" (UID: \"abedadc3-8c47-44fc-81f4-7dbc96610fa0\") " pod="openshift-controller-manager/controller-manager-649b5d48d7-cdcrk" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.783103 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/abedadc3-8c47-44fc-81f4-7dbc96610fa0-client-ca\") pod \"controller-manager-649b5d48d7-cdcrk\" (UID: \"abedadc3-8c47-44fc-81f4-7dbc96610fa0\") " pod="openshift-controller-manager/controller-manager-649b5d48d7-cdcrk" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.783141 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/abedadc3-8c47-44fc-81f4-7dbc96610fa0-proxy-ca-bundles\") pod \"controller-manager-649b5d48d7-cdcrk\" (UID: \"abedadc3-8c47-44fc-81f4-7dbc96610fa0\") " pod="openshift-controller-manager/controller-manager-649b5d48d7-cdcrk" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.783164 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/abedadc3-8c47-44fc-81f4-7dbc96610fa0-tmp\") pod \"controller-manager-649b5d48d7-cdcrk\" (UID: \"abedadc3-8c47-44fc-81f4-7dbc96610fa0\") " pod="openshift-controller-manager/controller-manager-649b5d48d7-cdcrk" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.783232 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee9ac25b-a424-4b76-9acc-bdbacfb0a96b-config\") on node \"crc\" DevicePath \"\"" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.783242 5099 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee9ac25b-a424-4b76-9acc-bdbacfb0a96b-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.783251 5099 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ee9ac25b-a424-4b76-9acc-bdbacfb0a96b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.783261 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ee9ac25b-a424-4b76-9acc-bdbacfb0a96b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.783271 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q9wnd\" (UniqueName: \"kubernetes.io/projected/ee9ac25b-a424-4b76-9acc-bdbacfb0a96b-kube-api-access-q9wnd\") on node \"crc\" DevicePath \"\"" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.784445 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/abedadc3-8c47-44fc-81f4-7dbc96610fa0-tmp\") pod \"controller-manager-649b5d48d7-cdcrk\" (UID: \"abedadc3-8c47-44fc-81f4-7dbc96610fa0\") " pod="openshift-controller-manager/controller-manager-649b5d48d7-cdcrk" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.784690 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/abedadc3-8c47-44fc-81f4-7dbc96610fa0-client-ca\") pod \"controller-manager-649b5d48d7-cdcrk\" (UID: \"abedadc3-8c47-44fc-81f4-7dbc96610fa0\") " pod="openshift-controller-manager/controller-manager-649b5d48d7-cdcrk" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.785728 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/abedadc3-8c47-44fc-81f4-7dbc96610fa0-proxy-ca-bundles\") pod \"controller-manager-649b5d48d7-cdcrk\" (UID: \"abedadc3-8c47-44fc-81f4-7dbc96610fa0\") " pod="openshift-controller-manager/controller-manager-649b5d48d7-cdcrk" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.786356 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abedadc3-8c47-44fc-81f4-7dbc96610fa0-config\") pod \"controller-manager-649b5d48d7-cdcrk\" (UID: \"abedadc3-8c47-44fc-81f4-7dbc96610fa0\") " pod="openshift-controller-manager/controller-manager-649b5d48d7-cdcrk" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.792750 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abedadc3-8c47-44fc-81f4-7dbc96610fa0-serving-cert\") pod \"controller-manager-649b5d48d7-cdcrk\" (UID: \"abedadc3-8c47-44fc-81f4-7dbc96610fa0\") " pod="openshift-controller-manager/controller-manager-649b5d48d7-cdcrk" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.810515 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rz6wz\" (UniqueName: \"kubernetes.io/projected/abedadc3-8c47-44fc-81f4-7dbc96610fa0-kube-api-access-rz6wz\") pod \"controller-manager-649b5d48d7-cdcrk\" (UID: \"abedadc3-8c47-44fc-81f4-7dbc96610fa0\") " pod="openshift-controller-manager/controller-manager-649b5d48d7-cdcrk" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.878155 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-649b5d48d7-cdcrk" Jan 21 18:19:02 crc kubenswrapper[5099]: I0121 18:19:02.934057 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64bc54d77c-hcs4c"] Jan 21 18:19:02 crc kubenswrapper[5099]: W0121 18:19:02.946220 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1628b9b_58d9_4368_bfcd_88a29a79b9d4.slice/crio-2e9cc2a6442c5a87cbbbe7fd9a89c7edd93201caf959f64c6cae2ebc29912af7 WatchSource:0}: Error finding container 2e9cc2a6442c5a87cbbbe7fd9a89c7edd93201caf959f64c6cae2ebc29912af7: Status 404 returned error can't find the container with id 2e9cc2a6442c5a87cbbbe7fd9a89c7edd93201caf959f64c6cae2ebc29912af7 Jan 21 18:19:03 crc kubenswrapper[5099]: I0121 18:19:03.095066 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-649b5d48d7-cdcrk"] Jan 21 18:19:03 crc kubenswrapper[5099]: I0121 18:19:03.214456 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5558b976b5-w6wqd" event={"ID":"39a7d04e-abc7-4b9c-9c51-fce2b2c699b7","Type":"ContainerDied","Data":"bfe7a9869031d5afc3a897de574558439f21a92fec88d08ce3013247c2c7760a"} Jan 21 18:19:03 crc kubenswrapper[5099]: I0121 18:19:03.214526 5099 scope.go:117] "RemoveContainer" containerID="3b57b1abb99b748be06d4d438a11c05d645c21595b02cac76a427d549d875fa8" Jan 21 18:19:03 crc kubenswrapper[5099]: I0121 18:19:03.214631 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5558b976b5-w6wqd" Jan 21 18:19:03 crc kubenswrapper[5099]: I0121 18:19:03.224627 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64bc54d77c-hcs4c" event={"ID":"a1628b9b-58d9-4368-bfcd-88a29a79b9d4","Type":"ContainerStarted","Data":"76ec295d340996d7b9974bbc4cd94c8f8829685e7e40f341361da38623b7365f"} Jan 21 18:19:03 crc kubenswrapper[5099]: I0121 18:19:03.224697 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64bc54d77c-hcs4c" event={"ID":"a1628b9b-58d9-4368-bfcd-88a29a79b9d4","Type":"ContainerStarted","Data":"2e9cc2a6442c5a87cbbbe7fd9a89c7edd93201caf959f64c6cae2ebc29912af7"} Jan 21 18:19:03 crc kubenswrapper[5099]: I0121 18:19:03.225295 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-64bc54d77c-hcs4c" Jan 21 18:19:03 crc kubenswrapper[5099]: I0121 18:19:03.230286 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6cc894f6b5-z2gqx" Jan 21 18:19:03 crc kubenswrapper[5099]: I0121 18:19:03.230354 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cc894f6b5-z2gqx" event={"ID":"ee9ac25b-a424-4b76-9acc-bdbacfb0a96b","Type":"ContainerDied","Data":"6c56d24925b9585b3b3884e4c0b4c95b17e5e754ad89c4ebbd283102577079cf"} Jan 21 18:19:03 crc kubenswrapper[5099]: I0121 18:19:03.238128 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-649b5d48d7-cdcrk" event={"ID":"abedadc3-8c47-44fc-81f4-7dbc96610fa0","Type":"ContainerStarted","Data":"3c2be2cbb016a2ed5cad9476d103b6db71569cab8731eeda2cd2482ef50ac47d"} Jan 21 18:19:03 crc kubenswrapper[5099]: I0121 18:19:03.252138 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-64bc54d77c-hcs4c" podStartSLOduration=2.252109665 podStartE2EDuration="2.252109665s" podCreationTimestamp="2026-01-21 18:19:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:19:03.246065568 +0000 UTC m=+300.660028029" watchObservedRunningTime="2026-01-21 18:19:03.252109665 +0000 UTC m=+300.666072136" Jan 21 18:19:03 crc kubenswrapper[5099]: I0121 18:19:03.258880 5099 scope.go:117] "RemoveContainer" containerID="dd5a51be1341f553106f8a34e5e9f90bdbb7c7c6d73e691bfa512708d08382ea" Jan 21 18:19:03 crc kubenswrapper[5099]: I0121 18:19:03.281392 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5558b976b5-w6wqd"] Jan 21 18:19:03 crc kubenswrapper[5099]: I0121 18:19:03.290958 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5558b976b5-w6wqd"] Jan 21 18:19:03 crc kubenswrapper[5099]: I0121 18:19:03.298556 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6cc894f6b5-z2gqx"] Jan 21 18:19:03 crc kubenswrapper[5099]: I0121 18:19:03.303592 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6cc894f6b5-z2gqx"] Jan 21 18:19:03 crc kubenswrapper[5099]: I0121 18:19:03.920968 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39a7d04e-abc7-4b9c-9c51-fce2b2c699b7" path="/var/lib/kubelet/pods/39a7d04e-abc7-4b9c-9c51-fce2b2c699b7/volumes" Jan 21 18:19:03 crc kubenswrapper[5099]: I0121 18:19:03.921861 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee9ac25b-a424-4b76-9acc-bdbacfb0a96b" path="/var/lib/kubelet/pods/ee9ac25b-a424-4b76-9acc-bdbacfb0a96b/volumes" Jan 21 18:19:04 crc kubenswrapper[5099]: I0121 18:19:04.037096 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-64bc54d77c-hcs4c" Jan 21 18:19:04 crc kubenswrapper[5099]: I0121 18:19:04.171491 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 18:19:04 crc kubenswrapper[5099]: I0121 18:19:04.171615 5099 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 18:19:04 crc kubenswrapper[5099]: I0121 18:19:04.256403 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-649b5d48d7-cdcrk" event={"ID":"abedadc3-8c47-44fc-81f4-7dbc96610fa0","Type":"ContainerStarted","Data":"c8e88ac8545650e31673fd81b4d89f73034cdc9402947a0f3cf26fed8cdc4d36"} Jan 21 18:19:04 crc kubenswrapper[5099]: I0121 18:19:04.256844 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-649b5d48d7-cdcrk" Jan 21 18:19:04 crc kubenswrapper[5099]: I0121 18:19:04.263866 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-649b5d48d7-cdcrk" Jan 21 18:19:04 crc kubenswrapper[5099]: I0121 18:19:04.288535 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-649b5d48d7-cdcrk" podStartSLOduration=3.288507165 podStartE2EDuration="3.288507165s" podCreationTimestamp="2026-01-21 18:19:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:19:04.278875784 +0000 UTC m=+301.692838245" watchObservedRunningTime="2026-01-21 18:19:04.288507165 +0000 UTC m=+301.702469626" Jan 21 18:19:26 crc kubenswrapper[5099]: I0121 18:19:26.863936 5099 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 21 18:20:04 crc kubenswrapper[5099]: I0121 18:20:04.956047 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6fsvr"] Jan 21 18:20:04 crc kubenswrapper[5099]: I0121 18:20:04.957664 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6fsvr" podUID="450bc18d-8ddc-42eb-bc2b-0cd44d7198b8" containerName="registry-server" containerID="cri-o://4c9a2e31555489e21f34e0136a7b06646e4dad4cc1fc51154dc4f184ed4f756e" gracePeriod=30 Jan 21 18:20:04 crc kubenswrapper[5099]: I0121 18:20:04.967022 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xdblj"] Jan 21 18:20:04 crc kubenswrapper[5099]: I0121 18:20:04.967545 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xdblj" podUID="a5ddd790-cf10-4dfc-a9ed-2ad08824bf51" containerName="registry-server" containerID="cri-o://def40824eb02a8475858cbf298f5a754aaabe9aeaaebea654de5073d6928cd36" gracePeriod=30 Jan 21 18:20:04 crc kubenswrapper[5099]: I0121 18:20:04.975333 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-lxg2b"] Jan 21 18:20:04 crc kubenswrapper[5099]: I0121 18:20:04.975996 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-lxg2b" podUID="67a0e83c-f043-4329-95ac-4cc0a6ac538f" containerName="marketplace-operator" containerID="cri-o://3746f33a6c1203e3da4a3fde1a568114e4234057d4dd34a7937d1e151d21a891" gracePeriod=30 Jan 21 18:20:04 crc kubenswrapper[5099]: I0121 18:20:04.983454 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6gg9l"] Jan 21 18:20:04 crc 
kubenswrapper[5099]: I0121 18:20:04.983912 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6gg9l" podUID="97792460-87be-4332-8f5b-dd5e8e2e5d63" containerName="registry-server" containerID="cri-o://1694c543ce20282c680cee52b0c83185eab164807db3301aa6bd16c94d59b5cf" gracePeriod=30 Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.004313 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qsx2f"] Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.005326 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qsx2f" podUID="28d3b79b-3ce4-427c-834d-9d4b2f9f0601" containerName="registry-server" containerID="cri-o://d8cc42bcafaa7f00591bae1d081f2dbed3296d3b8591476b8a45c329eb721f76" gracePeriod=30 Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.006903 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-x9tj4"] Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.021178 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-x9tj4"] Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.021360 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-x9tj4" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.054407 5099 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-lxg2b container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.054520 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-lxg2b" podUID="67a0e83c-f043-4329-95ac-4cc0a6ac538f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.175975 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1df72cc8-24fd-4b08-b17a-c5509ed05634-tmp\") pod \"marketplace-operator-547dbd544d-x9tj4\" (UID: \"1df72cc8-24fd-4b08-b17a-c5509ed05634\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-x9tj4" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.176026 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1df72cc8-24fd-4b08-b17a-c5509ed05634-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-x9tj4\" (UID: \"1df72cc8-24fd-4b08-b17a-c5509ed05634\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-x9tj4" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.176072 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1df72cc8-24fd-4b08-b17a-c5509ed05634-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-x9tj4\" (UID: \"1df72cc8-24fd-4b08-b17a-c5509ed05634\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-x9tj4" Jan 21 
18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.176190 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lkj6\" (UniqueName: \"kubernetes.io/projected/1df72cc8-24fd-4b08-b17a-c5509ed05634-kube-api-access-7lkj6\") pod \"marketplace-operator-547dbd544d-x9tj4\" (UID: \"1df72cc8-24fd-4b08-b17a-c5509ed05634\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-x9tj4" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.280904 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1df72cc8-24fd-4b08-b17a-c5509ed05634-tmp\") pod \"marketplace-operator-547dbd544d-x9tj4\" (UID: \"1df72cc8-24fd-4b08-b17a-c5509ed05634\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-x9tj4" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.280973 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1df72cc8-24fd-4b08-b17a-c5509ed05634-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-x9tj4\" (UID: \"1df72cc8-24fd-4b08-b17a-c5509ed05634\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-x9tj4" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.281034 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1df72cc8-24fd-4b08-b17a-c5509ed05634-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-x9tj4\" (UID: \"1df72cc8-24fd-4b08-b17a-c5509ed05634\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-x9tj4" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.281065 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7lkj6\" (UniqueName: \"kubernetes.io/projected/1df72cc8-24fd-4b08-b17a-c5509ed05634-kube-api-access-7lkj6\") pod \"marketplace-operator-547dbd544d-x9tj4\" (UID: \"1df72cc8-24fd-4b08-b17a-c5509ed05634\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-x9tj4" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.282196 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1df72cc8-24fd-4b08-b17a-c5509ed05634-tmp\") pod \"marketplace-operator-547dbd544d-x9tj4\" (UID: \"1df72cc8-24fd-4b08-b17a-c5509ed05634\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-x9tj4" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.283036 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1df72cc8-24fd-4b08-b17a-c5509ed05634-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-x9tj4\" (UID: \"1df72cc8-24fd-4b08-b17a-c5509ed05634\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-x9tj4" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.290728 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1df72cc8-24fd-4b08-b17a-c5509ed05634-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-x9tj4\" (UID: \"1df72cc8-24fd-4b08-b17a-c5509ed05634\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-x9tj4" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.301241 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-7lkj6\" (UniqueName: \"kubernetes.io/projected/1df72cc8-24fd-4b08-b17a-c5509ed05634-kube-api-access-7lkj6\") pod \"marketplace-operator-547dbd544d-x9tj4\" (UID: \"1df72cc8-24fd-4b08-b17a-c5509ed05634\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-x9tj4" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.424991 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-x9tj4" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.439897 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xdblj" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.523812 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6fsvr" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.527371 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-lxg2b" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.528467 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6gg9l" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.586588 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5ddd790-cf10-4dfc-a9ed-2ad08824bf51-catalog-content\") pod \"a5ddd790-cf10-4dfc-a9ed-2ad08824bf51\" (UID: \"a5ddd790-cf10-4dfc-a9ed-2ad08824bf51\") " Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.586642 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4wq9\" (UniqueName: \"kubernetes.io/projected/a5ddd790-cf10-4dfc-a9ed-2ad08824bf51-kube-api-access-w4wq9\") pod \"a5ddd790-cf10-4dfc-a9ed-2ad08824bf51\" (UID: \"a5ddd790-cf10-4dfc-a9ed-2ad08824bf51\") " Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.586862 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5ddd790-cf10-4dfc-a9ed-2ad08824bf51-utilities\") pod \"a5ddd790-cf10-4dfc-a9ed-2ad08824bf51\" (UID: \"a5ddd790-cf10-4dfc-a9ed-2ad08824bf51\") " Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.592939 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5ddd790-cf10-4dfc-a9ed-2ad08824bf51-kube-api-access-w4wq9" (OuterVolumeSpecName: "kube-api-access-w4wq9") pod "a5ddd790-cf10-4dfc-a9ed-2ad08824bf51" (UID: "a5ddd790-cf10-4dfc-a9ed-2ad08824bf51"). InnerVolumeSpecName "kube-api-access-w4wq9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.596523 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5ddd790-cf10-4dfc-a9ed-2ad08824bf51-utilities" (OuterVolumeSpecName: "utilities") pod "a5ddd790-cf10-4dfc-a9ed-2ad08824bf51" (UID: "a5ddd790-cf10-4dfc-a9ed-2ad08824bf51"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.665047 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qsx2f" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.680107 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5ddd790-cf10-4dfc-a9ed-2ad08824bf51-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a5ddd790-cf10-4dfc-a9ed-2ad08824bf51" (UID: "a5ddd790-cf10-4dfc-a9ed-2ad08824bf51"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.689929 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9ktb\" (UniqueName: \"kubernetes.io/projected/97792460-87be-4332-8f5b-dd5e8e2e5d63-kube-api-access-p9ktb\") pod \"97792460-87be-4332-8f5b-dd5e8e2e5d63\" (UID: \"97792460-87be-4332-8f5b-dd5e8e2e5d63\") " Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.690080 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97792460-87be-4332-8f5b-dd5e8e2e5d63-catalog-content\") pod \"97792460-87be-4332-8f5b-dd5e8e2e5d63\" (UID: \"97792460-87be-4332-8f5b-dd5e8e2e5d63\") " Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.690103 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-576mp\" (UniqueName: \"kubernetes.io/projected/450bc18d-8ddc-42eb-bc2b-0cd44d7198b8-kube-api-access-576mp\") pod \"450bc18d-8ddc-42eb-bc2b-0cd44d7198b8\" (UID: \"450bc18d-8ddc-42eb-bc2b-0cd44d7198b8\") " Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.690129 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/67a0e83c-f043-4329-95ac-4cc0a6ac538f-marketplace-trusted-ca\") pod \"67a0e83c-f043-4329-95ac-4cc0a6ac538f\" (UID: \"67a0e83c-f043-4329-95ac-4cc0a6ac538f\") " Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.690210 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97792460-87be-4332-8f5b-dd5e8e2e5d63-utilities\") pod \"97792460-87be-4332-8f5b-dd5e8e2e5d63\" (UID: \"97792460-87be-4332-8f5b-dd5e8e2e5d63\") " Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.690277 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/67a0e83c-f043-4329-95ac-4cc0a6ac538f-tmp\") pod \"67a0e83c-f043-4329-95ac-4cc0a6ac538f\" (UID: \"67a0e83c-f043-4329-95ac-4cc0a6ac538f\") " Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.690504 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/67a0e83c-f043-4329-95ac-4cc0a6ac538f-marketplace-operator-metrics\") pod \"67a0e83c-f043-4329-95ac-4cc0a6ac538f\" (UID: \"67a0e83c-f043-4329-95ac-4cc0a6ac538f\") " Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.690535 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/450bc18d-8ddc-42eb-bc2b-0cd44d7198b8-catalog-content\") pod \"450bc18d-8ddc-42eb-bc2b-0cd44d7198b8\" (UID: \"450bc18d-8ddc-42eb-bc2b-0cd44d7198b8\") " Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.690582 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/450bc18d-8ddc-42eb-bc2b-0cd44d7198b8-utilities\") pod \"450bc18d-8ddc-42eb-bc2b-0cd44d7198b8\" (UID: \"450bc18d-8ddc-42eb-bc2b-0cd44d7198b8\") " Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.690646 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-588sd\" (UniqueName: \"kubernetes.io/projected/67a0e83c-f043-4329-95ac-4cc0a6ac538f-kube-api-access-588sd\") pod \"67a0e83c-f043-4329-95ac-4cc0a6ac538f\" (UID: \"67a0e83c-f043-4329-95ac-4cc0a6ac538f\") " Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.690874 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5ddd790-cf10-4dfc-a9ed-2ad08824bf51-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.690886 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5ddd790-cf10-4dfc-a9ed-2ad08824bf51-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.690911 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w4wq9\" (UniqueName: \"kubernetes.io/projected/a5ddd790-cf10-4dfc-a9ed-2ad08824bf51-kube-api-access-w4wq9\") on node \"crc\" DevicePath \"\"" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.691609 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67a0e83c-f043-4329-95ac-4cc0a6ac538f-tmp" (OuterVolumeSpecName: "tmp") pod "67a0e83c-f043-4329-95ac-4cc0a6ac538f" (UID: "67a0e83c-f043-4329-95ac-4cc0a6ac538f"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.692411 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67a0e83c-f043-4329-95ac-4cc0a6ac538f-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "67a0e83c-f043-4329-95ac-4cc0a6ac538f" (UID: "67a0e83c-f043-4329-95ac-4cc0a6ac538f"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.692821 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97792460-87be-4332-8f5b-dd5e8e2e5d63-utilities" (OuterVolumeSpecName: "utilities") pod "97792460-87be-4332-8f5b-dd5e8e2e5d63" (UID: "97792460-87be-4332-8f5b-dd5e8e2e5d63"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.693877 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/450bc18d-8ddc-42eb-bc2b-0cd44d7198b8-utilities" (OuterVolumeSpecName: "utilities") pod "450bc18d-8ddc-42eb-bc2b-0cd44d7198b8" (UID: "450bc18d-8ddc-42eb-bc2b-0cd44d7198b8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.694277 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/450bc18d-8ddc-42eb-bc2b-0cd44d7198b8-kube-api-access-576mp" (OuterVolumeSpecName: "kube-api-access-576mp") pod "450bc18d-8ddc-42eb-bc2b-0cd44d7198b8" (UID: "450bc18d-8ddc-42eb-bc2b-0cd44d7198b8"). InnerVolumeSpecName "kube-api-access-576mp". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.698092 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97792460-87be-4332-8f5b-dd5e8e2e5d63-kube-api-access-p9ktb" (OuterVolumeSpecName: "kube-api-access-p9ktb") pod "97792460-87be-4332-8f5b-dd5e8e2e5d63" (UID: "97792460-87be-4332-8f5b-dd5e8e2e5d63"). InnerVolumeSpecName "kube-api-access-p9ktb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.698221 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67a0e83c-f043-4329-95ac-4cc0a6ac538f-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "67a0e83c-f043-4329-95ac-4cc0a6ac538f" (UID: "67a0e83c-f043-4329-95ac-4cc0a6ac538f"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.707416 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67a0e83c-f043-4329-95ac-4cc0a6ac538f-kube-api-access-588sd" (OuterVolumeSpecName: "kube-api-access-588sd") pod "67a0e83c-f043-4329-95ac-4cc0a6ac538f" (UID: "67a0e83c-f043-4329-95ac-4cc0a6ac538f"). InnerVolumeSpecName "kube-api-access-588sd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.708456 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97792460-87be-4332-8f5b-dd5e8e2e5d63-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "97792460-87be-4332-8f5b-dd5e8e2e5d63" (UID: "97792460-87be-4332-8f5b-dd5e8e2e5d63"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.719953 5099 generic.go:358] "Generic (PLEG): container finished" podID="450bc18d-8ddc-42eb-bc2b-0cd44d7198b8" containerID="4c9a2e31555489e21f34e0136a7b06646e4dad4cc1fc51154dc4f184ed4f756e" exitCode=0 Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.720070 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6fsvr" event={"ID":"450bc18d-8ddc-42eb-bc2b-0cd44d7198b8","Type":"ContainerDied","Data":"4c9a2e31555489e21f34e0136a7b06646e4dad4cc1fc51154dc4f184ed4f756e"} Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.720110 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6fsvr" event={"ID":"450bc18d-8ddc-42eb-bc2b-0cd44d7198b8","Type":"ContainerDied","Data":"910acc839c2072b6526fe9d5adcb8d0bd7ac871d722b246a67f15aff86d60ad6"} Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.720135 5099 scope.go:117] "RemoveContainer" containerID="4c9a2e31555489e21f34e0136a7b06646e4dad4cc1fc51154dc4f184ed4f756e" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.720170 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6fsvr" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.724460 5099 generic.go:358] "Generic (PLEG): container finished" podID="28d3b79b-3ce4-427c-834d-9d4b2f9f0601" containerID="d8cc42bcafaa7f00591bae1d081f2dbed3296d3b8591476b8a45c329eb721f76" exitCode=0 Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.724653 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qsx2f" event={"ID":"28d3b79b-3ce4-427c-834d-9d4b2f9f0601","Type":"ContainerDied","Data":"d8cc42bcafaa7f00591bae1d081f2dbed3296d3b8591476b8a45c329eb721f76"} Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.724830 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qsx2f" event={"ID":"28d3b79b-3ce4-427c-834d-9d4b2f9f0601","Type":"ContainerDied","Data":"779b3186531e80fb41012a82cf8100eafab73292ccf5a446c821271aee7a9429"} Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.724658 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qsx2f" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.730214 5099 generic.go:358] "Generic (PLEG): container finished" podID="67a0e83c-f043-4329-95ac-4cc0a6ac538f" containerID="3746f33a6c1203e3da4a3fde1a568114e4234057d4dd34a7937d1e151d21a891" exitCode=0 Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.730398 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-lxg2b" event={"ID":"67a0e83c-f043-4329-95ac-4cc0a6ac538f","Type":"ContainerDied","Data":"3746f33a6c1203e3da4a3fde1a568114e4234057d4dd34a7937d1e151d21a891"} Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.730426 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-lxg2b" event={"ID":"67a0e83c-f043-4329-95ac-4cc0a6ac538f","Type":"ContainerDied","Data":"433a8e825b48540804ad1e284162b2df710a627c20a99fb203f19e2e83ccb5a3"} Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.730556 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-lxg2b" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.734858 5099 generic.go:358] "Generic (PLEG): container finished" podID="a5ddd790-cf10-4dfc-a9ed-2ad08824bf51" containerID="def40824eb02a8475858cbf298f5a754aaabe9aeaaebea654de5073d6928cd36" exitCode=0 Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.734919 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xdblj" event={"ID":"a5ddd790-cf10-4dfc-a9ed-2ad08824bf51","Type":"ContainerDied","Data":"def40824eb02a8475858cbf298f5a754aaabe9aeaaebea654de5073d6928cd36"} Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.734954 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xdblj" event={"ID":"a5ddd790-cf10-4dfc-a9ed-2ad08824bf51","Type":"ContainerDied","Data":"d46ec0805edc99b67ea2249f94d5e897903fff9c684a23e2c4aa1b573d7ca358"} Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.734957 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xdblj" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.739853 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/450bc18d-8ddc-42eb-bc2b-0cd44d7198b8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "450bc18d-8ddc-42eb-bc2b-0cd44d7198b8" (UID: "450bc18d-8ddc-42eb-bc2b-0cd44d7198b8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.742931 5099 scope.go:117] "RemoveContainer" containerID="d7b175f37de9296ba0068a10fca50510714cbc9fd8c3c99a087e8b2a9c040329" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.762699 5099 generic.go:358] "Generic (PLEG): container finished" podID="97792460-87be-4332-8f5b-dd5e8e2e5d63" containerID="1694c543ce20282c680cee52b0c83185eab164807db3301aa6bd16c94d59b5cf" exitCode=0 Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.762765 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6gg9l" event={"ID":"97792460-87be-4332-8f5b-dd5e8e2e5d63","Type":"ContainerDied","Data":"1694c543ce20282c680cee52b0c83185eab164807db3301aa6bd16c94d59b5cf"} Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.762860 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6gg9l" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.762894 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6gg9l" event={"ID":"97792460-87be-4332-8f5b-dd5e8e2e5d63","Type":"ContainerDied","Data":"7fc6197adb8e4d14872dfc593fb723db00dded6a80ee129cb4ddb9642898d903"} Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.767006 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-lxg2b"] Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.767511 5099 scope.go:117] "RemoveContainer" containerID="6527d161dad9cd681d5d03f97d6438ea8efe285ef627a5d577b33bc840d90f8f" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.775437 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-lxg2b"] Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.788558 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xdblj"] Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.792070 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wz2cp\" (UniqueName: \"kubernetes.io/projected/28d3b79b-3ce4-427c-834d-9d4b2f9f0601-kube-api-access-wz2cp\") pod \"28d3b79b-3ce4-427c-834d-9d4b2f9f0601\" (UID: \"28d3b79b-3ce4-427c-834d-9d4b2f9f0601\") " Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.792448 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28d3b79b-3ce4-427c-834d-9d4b2f9f0601-catalog-content\") pod \"28d3b79b-3ce4-427c-834d-9d4b2f9f0601\" (UID: \"28d3b79b-3ce4-427c-834d-9d4b2f9f0601\") " Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.793757 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28d3b79b-3ce4-427c-834d-9d4b2f9f0601-utilities\") pod \"28d3b79b-3ce4-427c-834d-9d4b2f9f0601\" (UID: 
\"28d3b79b-3ce4-427c-834d-9d4b2f9f0601\") " Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.794819 5099 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/67a0e83c-f043-4329-95ac-4cc0a6ac538f-tmp\") on node \"crc\" DevicePath \"\"" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.794962 5099 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/67a0e83c-f043-4329-95ac-4cc0a6ac538f-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.795066 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/450bc18d-8ddc-42eb-bc2b-0cd44d7198b8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.795212 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/450bc18d-8ddc-42eb-bc2b-0cd44d7198b8-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.795330 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-588sd\" (UniqueName: \"kubernetes.io/projected/67a0e83c-f043-4329-95ac-4cc0a6ac538f-kube-api-access-588sd\") on node \"crc\" DevicePath \"\"" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.795458 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p9ktb\" (UniqueName: \"kubernetes.io/projected/97792460-87be-4332-8f5b-dd5e8e2e5d63-kube-api-access-p9ktb\") on node \"crc\" DevicePath \"\"" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.795570 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97792460-87be-4332-8f5b-dd5e8e2e5d63-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.795681 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-576mp\" (UniqueName: \"kubernetes.io/projected/450bc18d-8ddc-42eb-bc2b-0cd44d7198b8-kube-api-access-576mp\") on node \"crc\" DevicePath \"\"" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.795813 5099 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/67a0e83c-f043-4329-95ac-4cc0a6ac538f-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.795928 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97792460-87be-4332-8f5b-dd5e8e2e5d63-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.794826 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28d3b79b-3ce4-427c-834d-9d4b2f9f0601-utilities" (OuterVolumeSpecName: "utilities") pod "28d3b79b-3ce4-427c-834d-9d4b2f9f0601" (UID: "28d3b79b-3ce4-427c-834d-9d4b2f9f0601"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.798754 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xdblj"] Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.802841 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6gg9l"] Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.803692 5099 scope.go:117] "RemoveContainer" containerID="4c9a2e31555489e21f34e0136a7b06646e4dad4cc1fc51154dc4f184ed4f756e" Jan 21 18:20:05 crc kubenswrapper[5099]: E0121 18:20:05.804270 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c9a2e31555489e21f34e0136a7b06646e4dad4cc1fc51154dc4f184ed4f756e\": container with ID starting with 4c9a2e31555489e21f34e0136a7b06646e4dad4cc1fc51154dc4f184ed4f756e not found: ID does not exist" containerID="4c9a2e31555489e21f34e0136a7b06646e4dad4cc1fc51154dc4f184ed4f756e" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.804430 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c9a2e31555489e21f34e0136a7b06646e4dad4cc1fc51154dc4f184ed4f756e"} err="failed to get container status \"4c9a2e31555489e21f34e0136a7b06646e4dad4cc1fc51154dc4f184ed4f756e\": rpc error: code = NotFound desc = could not find container \"4c9a2e31555489e21f34e0136a7b06646e4dad4cc1fc51154dc4f184ed4f756e\": container with ID starting with 4c9a2e31555489e21f34e0136a7b06646e4dad4cc1fc51154dc4f184ed4f756e not found: ID does not exist" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.804520 5099 scope.go:117] "RemoveContainer" containerID="d7b175f37de9296ba0068a10fca50510714cbc9fd8c3c99a087e8b2a9c040329" Jan 21 18:20:05 crc kubenswrapper[5099]: E0121 18:20:05.805117 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7b175f37de9296ba0068a10fca50510714cbc9fd8c3c99a087e8b2a9c040329\": container with ID starting with d7b175f37de9296ba0068a10fca50510714cbc9fd8c3c99a087e8b2a9c040329 not found: ID does not exist" containerID="d7b175f37de9296ba0068a10fca50510714cbc9fd8c3c99a087e8b2a9c040329" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.805163 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7b175f37de9296ba0068a10fca50510714cbc9fd8c3c99a087e8b2a9c040329"} err="failed to get container status \"d7b175f37de9296ba0068a10fca50510714cbc9fd8c3c99a087e8b2a9c040329\": rpc error: code = NotFound desc = could not find container \"d7b175f37de9296ba0068a10fca50510714cbc9fd8c3c99a087e8b2a9c040329\": container with ID starting with d7b175f37de9296ba0068a10fca50510714cbc9fd8c3c99a087e8b2a9c040329 not found: ID does not exist" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.805190 5099 scope.go:117] "RemoveContainer" containerID="6527d161dad9cd681d5d03f97d6438ea8efe285ef627a5d577b33bc840d90f8f" Jan 21 18:20:05 crc kubenswrapper[5099]: E0121 18:20:05.805550 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6527d161dad9cd681d5d03f97d6438ea8efe285ef627a5d577b33bc840d90f8f\": container with ID starting with 6527d161dad9cd681d5d03f97d6438ea8efe285ef627a5d577b33bc840d90f8f not found: ID does not exist" containerID="6527d161dad9cd681d5d03f97d6438ea8efe285ef627a5d577b33bc840d90f8f" Jan 21 18:20:05 crc 
Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.805586 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6527d161dad9cd681d5d03f97d6438ea8efe285ef627a5d577b33bc840d90f8f"} err="failed to get container status \"6527d161dad9cd681d5d03f97d6438ea8efe285ef627a5d577b33bc840d90f8f\": rpc error: code = NotFound desc = could not find container \"6527d161dad9cd681d5d03f97d6438ea8efe285ef627a5d577b33bc840d90f8f\": container with ID starting with 6527d161dad9cd681d5d03f97d6438ea8efe285ef627a5d577b33bc840d90f8f not found: ID does not exist"
Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.805610 5099 scope.go:117] "RemoveContainer" containerID="d8cc42bcafaa7f00591bae1d081f2dbed3296d3b8591476b8a45c329eb721f76"
Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.806403 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28d3b79b-3ce4-427c-834d-9d4b2f9f0601-kube-api-access-wz2cp" (OuterVolumeSpecName: "kube-api-access-wz2cp") pod "28d3b79b-3ce4-427c-834d-9d4b2f9f0601" (UID: "28d3b79b-3ce4-427c-834d-9d4b2f9f0601"). InnerVolumeSpecName "kube-api-access-wz2cp". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.808247 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6gg9l"]
Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.824399 5099 scope.go:117] "RemoveContainer" containerID="8323eef1075a019470e8225c269a356ef1cd9d8ea33dba489865791db39b5542"
Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.842794 5099 scope.go:117] "RemoveContainer" containerID="d7b335b1790036f918c921c78ea31360d70e7fb73a9225116ec50d01437dd00b"
Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.857025 5099 scope.go:117] "RemoveContainer" containerID="d8cc42bcafaa7f00591bae1d081f2dbed3296d3b8591476b8a45c329eb721f76"
Jan 21 18:20:05 crc kubenswrapper[5099]: E0121 18:20:05.857755 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8cc42bcafaa7f00591bae1d081f2dbed3296d3b8591476b8a45c329eb721f76\": container with ID starting with d8cc42bcafaa7f00591bae1d081f2dbed3296d3b8591476b8a45c329eb721f76 not found: ID does not exist" containerID="d8cc42bcafaa7f00591bae1d081f2dbed3296d3b8591476b8a45c329eb721f76"
Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.857785 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8cc42bcafaa7f00591bae1d081f2dbed3296d3b8591476b8a45c329eb721f76"} err="failed to get container status \"d8cc42bcafaa7f00591bae1d081f2dbed3296d3b8591476b8a45c329eb721f76\": rpc error: code = NotFound desc = could not find container \"d8cc42bcafaa7f00591bae1d081f2dbed3296d3b8591476b8a45c329eb721f76\": container with ID starting with d8cc42bcafaa7f00591bae1d081f2dbed3296d3b8591476b8a45c329eb721f76 not found: ID does not exist"
Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.857807 5099 scope.go:117] "RemoveContainer" containerID="8323eef1075a019470e8225c269a356ef1cd9d8ea33dba489865791db39b5542"
Jan 21 18:20:05 crc kubenswrapper[5099]: E0121 18:20:05.858191 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8323eef1075a019470e8225c269a356ef1cd9d8ea33dba489865791db39b5542\": container with ID starting with 8323eef1075a019470e8225c269a356ef1cd9d8ea33dba489865791db39b5542 not found: ID does not exist" containerID="8323eef1075a019470e8225c269a356ef1cd9d8ea33dba489865791db39b5542"
Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.858338 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8323eef1075a019470e8225c269a356ef1cd9d8ea33dba489865791db39b5542"} err="failed to get container status \"8323eef1075a019470e8225c269a356ef1cd9d8ea33dba489865791db39b5542\": rpc error: code = NotFound desc = could not find container \"8323eef1075a019470e8225c269a356ef1cd9d8ea33dba489865791db39b5542\": container with ID starting with 8323eef1075a019470e8225c269a356ef1cd9d8ea33dba489865791db39b5542 not found: ID does not exist"
Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.858436 5099 scope.go:117] "RemoveContainer" containerID="d7b335b1790036f918c921c78ea31360d70e7fb73a9225116ec50d01437dd00b"
Jan 21 18:20:05 crc kubenswrapper[5099]: E0121 18:20:05.859688 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7b335b1790036f918c921c78ea31360d70e7fb73a9225116ec50d01437dd00b\": container with ID starting with d7b335b1790036f918c921c78ea31360d70e7fb73a9225116ec50d01437dd00b not found: ID does not exist" containerID="d7b335b1790036f918c921c78ea31360d70e7fb73a9225116ec50d01437dd00b"
Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.859868 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7b335b1790036f918c921c78ea31360d70e7fb73a9225116ec50d01437dd00b"} err="failed to get container status \"d7b335b1790036f918c921c78ea31360d70e7fb73a9225116ec50d01437dd00b\": rpc error: code = NotFound desc = could not find container \"d7b335b1790036f918c921c78ea31360d70e7fb73a9225116ec50d01437dd00b\": container with ID starting with d7b335b1790036f918c921c78ea31360d70e7fb73a9225116ec50d01437dd00b not found: ID does not exist"
Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.859901 5099 scope.go:117] "RemoveContainer" containerID="3746f33a6c1203e3da4a3fde1a568114e4234057d4dd34a7937d1e151d21a891"
Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.874636 5099 scope.go:117] "RemoveContainer" containerID="fa1f346b9ce96a2f7d4a1d26a91e963bb32b45873f80bb3688ec0d64dae65c19"
Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.887822 5099 scope.go:117] "RemoveContainer" containerID="3746f33a6c1203e3da4a3fde1a568114e4234057d4dd34a7937d1e151d21a891"
Jan 21 18:20:05 crc kubenswrapper[5099]: E0121 18:20:05.888555 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3746f33a6c1203e3da4a3fde1a568114e4234057d4dd34a7937d1e151d21a891\": container with ID starting with 3746f33a6c1203e3da4a3fde1a568114e4234057d4dd34a7937d1e151d21a891 not found: ID does not exist" containerID="3746f33a6c1203e3da4a3fde1a568114e4234057d4dd34a7937d1e151d21a891"
Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.888584 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3746f33a6c1203e3da4a3fde1a568114e4234057d4dd34a7937d1e151d21a891"} err="failed to get container status \"3746f33a6c1203e3da4a3fde1a568114e4234057d4dd34a7937d1e151d21a891\": rpc error: code = NotFound desc = could not find container \"3746f33a6c1203e3da4a3fde1a568114e4234057d4dd34a7937d1e151d21a891\": container with ID starting with 3746f33a6c1203e3da4a3fde1a568114e4234057d4dd34a7937d1e151d21a891 not found: ID does not exist"
Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.888604 5099 scope.go:117] "RemoveContainer" containerID="fa1f346b9ce96a2f7d4a1d26a91e963bb32b45873f80bb3688ec0d64dae65c19"
Jan 21 18:20:05 crc kubenswrapper[5099]: E0121 18:20:05.889038 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa1f346b9ce96a2f7d4a1d26a91e963bb32b45873f80bb3688ec0d64dae65c19\": container with ID starting with fa1f346b9ce96a2f7d4a1d26a91e963bb32b45873f80bb3688ec0d64dae65c19 not found: ID does not exist" containerID="fa1f346b9ce96a2f7d4a1d26a91e963bb32b45873f80bb3688ec0d64dae65c19"
Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.889054 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa1f346b9ce96a2f7d4a1d26a91e963bb32b45873f80bb3688ec0d64dae65c19"} err="failed to get container status \"fa1f346b9ce96a2f7d4a1d26a91e963bb32b45873f80bb3688ec0d64dae65c19\": rpc error: code = NotFound desc = could not find container \"fa1f346b9ce96a2f7d4a1d26a91e963bb32b45873f80bb3688ec0d64dae65c19\": container with ID starting with fa1f346b9ce96a2f7d4a1d26a91e963bb32b45873f80bb3688ec0d64dae65c19 not found: ID does not exist"
Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.889068 5099 scope.go:117] "RemoveContainer" containerID="def40824eb02a8475858cbf298f5a754aaabe9aeaaebea654de5073d6928cd36"
Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.897196 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wz2cp\" (UniqueName: \"kubernetes.io/projected/28d3b79b-3ce4-427c-834d-9d4b2f9f0601-kube-api-access-wz2cp\") on node \"crc\" DevicePath \"\""
Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.897229 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28d3b79b-3ce4-427c-834d-9d4b2f9f0601-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.908427 5099 scope.go:117] "RemoveContainer" containerID="67b0b9555da9e2ade9035a90aa1af2f88fcba784b8b51df408cc088df0bc288b"
Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.911199 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28d3b79b-3ce4-427c-834d-9d4b2f9f0601-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "28d3b79b-3ce4-427c-834d-9d4b2f9f0601" (UID: "28d3b79b-3ce4-427c-834d-9d4b2f9f0601"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.923401 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67a0e83c-f043-4329-95ac-4cc0a6ac538f" path="/var/lib/kubelet/pods/67a0e83c-f043-4329-95ac-4cc0a6ac538f/volumes"
Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.924288 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97792460-87be-4332-8f5b-dd5e8e2e5d63" path="/var/lib/kubelet/pods/97792460-87be-4332-8f5b-dd5e8e2e5d63/volumes"
Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.925155 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5ddd790-cf10-4dfc-a9ed-2ad08824bf51" path="/var/lib/kubelet/pods/a5ddd790-cf10-4dfc-a9ed-2ad08824bf51/volumes"
Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.926899 5099 scope.go:117] "RemoveContainer" containerID="af0fd80aaccd37345e55bda5b321f097baa0df0cc700677a32d539575a37449d"
Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.950662 5099 scope.go:117] "RemoveContainer" containerID="def40824eb02a8475858cbf298f5a754aaabe9aeaaebea654de5073d6928cd36"
Jan 21 18:20:05 crc kubenswrapper[5099]: E0121 18:20:05.952946 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"def40824eb02a8475858cbf298f5a754aaabe9aeaaebea654de5073d6928cd36\": container with ID starting with def40824eb02a8475858cbf298f5a754aaabe9aeaaebea654de5073d6928cd36 not found: ID does not exist" containerID="def40824eb02a8475858cbf298f5a754aaabe9aeaaebea654de5073d6928cd36"
Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.952981 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"def40824eb02a8475858cbf298f5a754aaabe9aeaaebea654de5073d6928cd36"} err="failed to get container status \"def40824eb02a8475858cbf298f5a754aaabe9aeaaebea654de5073d6928cd36\": rpc error: code = NotFound desc = could not find container \"def40824eb02a8475858cbf298f5a754aaabe9aeaaebea654de5073d6928cd36\": container with ID starting with def40824eb02a8475858cbf298f5a754aaabe9aeaaebea654de5073d6928cd36 not found: ID does not exist"
Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.953009 5099 scope.go:117] "RemoveContainer" containerID="67b0b9555da9e2ade9035a90aa1af2f88fcba784b8b51df408cc088df0bc288b"
Jan 21 18:20:05 crc kubenswrapper[5099]: E0121 18:20:05.953692 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67b0b9555da9e2ade9035a90aa1af2f88fcba784b8b51df408cc088df0bc288b\": container with ID starting with 67b0b9555da9e2ade9035a90aa1af2f88fcba784b8b51df408cc088df0bc288b not found: ID does not exist" containerID="67b0b9555da9e2ade9035a90aa1af2f88fcba784b8b51df408cc088df0bc288b"
Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.953721 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67b0b9555da9e2ade9035a90aa1af2f88fcba784b8b51df408cc088df0bc288b"} err="failed to get container status \"67b0b9555da9e2ade9035a90aa1af2f88fcba784b8b51df408cc088df0bc288b\": rpc error: code = NotFound desc = could not find container \"67b0b9555da9e2ade9035a90aa1af2f88fcba784b8b51df408cc088df0bc288b\": container with ID starting with 67b0b9555da9e2ade9035a90aa1af2f88fcba784b8b51df408cc088df0bc288b not found: ID does not exist"
"RemoveContainer" containerID="af0fd80aaccd37345e55bda5b321f097baa0df0cc700677a32d539575a37449d" Jan 21 18:20:05 crc kubenswrapper[5099]: E0121 18:20:05.956759 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af0fd80aaccd37345e55bda5b321f097baa0df0cc700677a32d539575a37449d\": container with ID starting with af0fd80aaccd37345e55bda5b321f097baa0df0cc700677a32d539575a37449d not found: ID does not exist" containerID="af0fd80aaccd37345e55bda5b321f097baa0df0cc700677a32d539575a37449d" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.956839 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af0fd80aaccd37345e55bda5b321f097baa0df0cc700677a32d539575a37449d"} err="failed to get container status \"af0fd80aaccd37345e55bda5b321f097baa0df0cc700677a32d539575a37449d\": rpc error: code = NotFound desc = could not find container \"af0fd80aaccd37345e55bda5b321f097baa0df0cc700677a32d539575a37449d\": container with ID starting with af0fd80aaccd37345e55bda5b321f097baa0df0cc700677a32d539575a37449d not found: ID does not exist" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.956870 5099 scope.go:117] "RemoveContainer" containerID="1694c543ce20282c680cee52b0c83185eab164807db3301aa6bd16c94d59b5cf" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.963042 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-x9tj4"] Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.973352 5099 scope.go:117] "RemoveContainer" containerID="6c28dc402d9f28846a2abf5780285789f6afbdae9ac2c4ceb89461f83c80a4b0" Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.984300 5099 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 18:20:05 crc kubenswrapper[5099]: I0121 18:20:05.991182 5099 scope.go:117] "RemoveContainer" containerID="0068f289149287fcd6feaa6b33ed4948a8ad3f690eec5ed887ed1521d34b1ebc" Jan 21 18:20:06 crc kubenswrapper[5099]: I0121 18:20:05.999997 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28d3b79b-3ce4-427c-834d-9d4b2f9f0601-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 18:20:06 crc kubenswrapper[5099]: I0121 18:20:06.022774 5099 scope.go:117] "RemoveContainer" containerID="1694c543ce20282c680cee52b0c83185eab164807db3301aa6bd16c94d59b5cf" Jan 21 18:20:06 crc kubenswrapper[5099]: E0121 18:20:06.023406 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1694c543ce20282c680cee52b0c83185eab164807db3301aa6bd16c94d59b5cf\": container with ID starting with 1694c543ce20282c680cee52b0c83185eab164807db3301aa6bd16c94d59b5cf not found: ID does not exist" containerID="1694c543ce20282c680cee52b0c83185eab164807db3301aa6bd16c94d59b5cf" Jan 21 18:20:06 crc kubenswrapper[5099]: I0121 18:20:06.023463 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1694c543ce20282c680cee52b0c83185eab164807db3301aa6bd16c94d59b5cf"} err="failed to get container status \"1694c543ce20282c680cee52b0c83185eab164807db3301aa6bd16c94d59b5cf\": rpc error: code = NotFound desc = could not find container \"1694c543ce20282c680cee52b0c83185eab164807db3301aa6bd16c94d59b5cf\": container with ID starting with 1694c543ce20282c680cee52b0c83185eab164807db3301aa6bd16c94d59b5cf not found: ID does not exist" Jan 21 
18:20:06 crc kubenswrapper[5099]: I0121 18:20:06.023529 5099 scope.go:117] "RemoveContainer" containerID="6c28dc402d9f28846a2abf5780285789f6afbdae9ac2c4ceb89461f83c80a4b0" Jan 21 18:20:06 crc kubenswrapper[5099]: E0121 18:20:06.024537 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c28dc402d9f28846a2abf5780285789f6afbdae9ac2c4ceb89461f83c80a4b0\": container with ID starting with 6c28dc402d9f28846a2abf5780285789f6afbdae9ac2c4ceb89461f83c80a4b0 not found: ID does not exist" containerID="6c28dc402d9f28846a2abf5780285789f6afbdae9ac2c4ceb89461f83c80a4b0" Jan 21 18:20:06 crc kubenswrapper[5099]: I0121 18:20:06.024612 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c28dc402d9f28846a2abf5780285789f6afbdae9ac2c4ceb89461f83c80a4b0"} err="failed to get container status \"6c28dc402d9f28846a2abf5780285789f6afbdae9ac2c4ceb89461f83c80a4b0\": rpc error: code = NotFound desc = could not find container \"6c28dc402d9f28846a2abf5780285789f6afbdae9ac2c4ceb89461f83c80a4b0\": container with ID starting with 6c28dc402d9f28846a2abf5780285789f6afbdae9ac2c4ceb89461f83c80a4b0 not found: ID does not exist" Jan 21 18:20:06 crc kubenswrapper[5099]: I0121 18:20:06.024629 5099 scope.go:117] "RemoveContainer" containerID="0068f289149287fcd6feaa6b33ed4948a8ad3f690eec5ed887ed1521d34b1ebc" Jan 21 18:20:06 crc kubenswrapper[5099]: E0121 18:20:06.025134 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0068f289149287fcd6feaa6b33ed4948a8ad3f690eec5ed887ed1521d34b1ebc\": container with ID starting with 0068f289149287fcd6feaa6b33ed4948a8ad3f690eec5ed887ed1521d34b1ebc not found: ID does not exist" containerID="0068f289149287fcd6feaa6b33ed4948a8ad3f690eec5ed887ed1521d34b1ebc" Jan 21 18:20:06 crc kubenswrapper[5099]: I0121 18:20:06.025304 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0068f289149287fcd6feaa6b33ed4948a8ad3f690eec5ed887ed1521d34b1ebc"} err="failed to get container status \"0068f289149287fcd6feaa6b33ed4948a8ad3f690eec5ed887ed1521d34b1ebc\": rpc error: code = NotFound desc = could not find container \"0068f289149287fcd6feaa6b33ed4948a8ad3f690eec5ed887ed1521d34b1ebc\": container with ID starting with 0068f289149287fcd6feaa6b33ed4948a8ad3f690eec5ed887ed1521d34b1ebc not found: ID does not exist" Jan 21 18:20:06 crc kubenswrapper[5099]: I0121 18:20:06.046936 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6fsvr"] Jan 21 18:20:06 crc kubenswrapper[5099]: I0121 18:20:06.051931 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6fsvr"] Jan 21 18:20:06 crc kubenswrapper[5099]: I0121 18:20:06.066757 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qsx2f"] Jan 21 18:20:06 crc kubenswrapper[5099]: I0121 18:20:06.075913 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qsx2f"] Jan 21 18:20:06 crc kubenswrapper[5099]: I0121 18:20:06.781012 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-x9tj4" event={"ID":"1df72cc8-24fd-4b08-b17a-c5509ed05634","Type":"ContainerStarted","Data":"b5f0c1f0bb1639db88ffb698aabd6a6e766d8799bee5e527f219c1102d41f18f"} Jan 21 18:20:06 crc kubenswrapper[5099]: I0121 18:20:06.781079 
Jan 21 18:20:06 crc kubenswrapper[5099]: I0121 18:20:06.781079 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-x9tj4" event={"ID":"1df72cc8-24fd-4b08-b17a-c5509ed05634","Type":"ContainerStarted","Data":"c7f2ab4eea23107320b8e14c3b95a96d95ce6560338177f2f2c3bd7f308dfc02"}
Jan 21 18:20:06 crc kubenswrapper[5099]: I0121 18:20:06.781488 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-x9tj4"
Jan 21 18:20:06 crc kubenswrapper[5099]: I0121 18:20:06.785817 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-x9tj4"
Jan 21 18:20:06 crc kubenswrapper[5099]: I0121 18:20:06.804480 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-x9tj4" podStartSLOduration=2.804449617 podStartE2EDuration="2.804449617s" podCreationTimestamp="2026-01-21 18:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:20:06.801022319 +0000 UTC m=+364.214984871" watchObservedRunningTime="2026-01-21 18:20:06.804449617 +0000 UTC m=+364.218412108"
Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.155289 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pcdgg"]
Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.156763 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a5ddd790-cf10-4dfc-a9ed-2ad08824bf51" containerName="registry-server"
Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.156785 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5ddd790-cf10-4dfc-a9ed-2ad08824bf51" containerName="registry-server"
Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.156800 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="450bc18d-8ddc-42eb-bc2b-0cd44d7198b8" containerName="registry-server"
Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.156809 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="450bc18d-8ddc-42eb-bc2b-0cd44d7198b8" containerName="registry-server"
Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.156824 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a5ddd790-cf10-4dfc-a9ed-2ad08824bf51" containerName="extract-content"
Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.156835 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5ddd790-cf10-4dfc-a9ed-2ad08824bf51" containerName="extract-content"
Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.156846 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="67a0e83c-f043-4329-95ac-4cc0a6ac538f" containerName="marketplace-operator"
Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.156853 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="67a0e83c-f043-4329-95ac-4cc0a6ac538f" containerName="marketplace-operator"
Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.156867 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="28d3b79b-3ce4-427c-834d-9d4b2f9f0601" containerName="registry-server"
Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.156874 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="28d3b79b-3ce4-427c-834d-9d4b2f9f0601" containerName="registry-server"
Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.156893 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="97792460-87be-4332-8f5b-dd5e8e2e5d63" containerName="extract-content"
Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.156901 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="97792460-87be-4332-8f5b-dd5e8e2e5d63" containerName="extract-content"
Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.156910 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a5ddd790-cf10-4dfc-a9ed-2ad08824bf51" containerName="extract-utilities"
Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.156918 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5ddd790-cf10-4dfc-a9ed-2ad08824bf51" containerName="extract-utilities"
Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.156930 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="450bc18d-8ddc-42eb-bc2b-0cd44d7198b8" containerName="extract-content"
Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.156938 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="450bc18d-8ddc-42eb-bc2b-0cd44d7198b8" containerName="extract-content"
Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.156947 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="67a0e83c-f043-4329-95ac-4cc0a6ac538f" containerName="marketplace-operator"
Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.156957 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="67a0e83c-f043-4329-95ac-4cc0a6ac538f" containerName="marketplace-operator"
Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.156967 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="450bc18d-8ddc-42eb-bc2b-0cd44d7198b8" containerName="extract-utilities"
Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.156975 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="450bc18d-8ddc-42eb-bc2b-0cd44d7198b8" containerName="extract-utilities"
Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.156991 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="28d3b79b-3ce4-427c-834d-9d4b2f9f0601" containerName="extract-content"
Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.156999 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="28d3b79b-3ce4-427c-834d-9d4b2f9f0601" containerName="extract-content"
Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.157011 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="28d3b79b-3ce4-427c-834d-9d4b2f9f0601" containerName="extract-utilities"
Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.157018 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="28d3b79b-3ce4-427c-834d-9d4b2f9f0601" containerName="extract-utilities"
Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.157039 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="97792460-87be-4332-8f5b-dd5e8e2e5d63" containerName="registry-server"
Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.157047 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="97792460-87be-4332-8f5b-dd5e8e2e5d63" containerName="registry-server"
Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.157056 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="97792460-87be-4332-8f5b-dd5e8e2e5d63" containerName="extract-utilities"
Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.157065 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="97792460-87be-4332-8f5b-dd5e8e2e5d63" containerName="extract-utilities"
Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.157193 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="450bc18d-8ddc-42eb-bc2b-0cd44d7198b8" containerName="registry-server"
Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.157212 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="67a0e83c-f043-4329-95ac-4cc0a6ac538f" containerName="marketplace-operator"
Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.157222 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="97792460-87be-4332-8f5b-dd5e8e2e5d63" containerName="registry-server"
Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.157235 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="28d3b79b-3ce4-427c-834d-9d4b2f9f0601" containerName="registry-server"
Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.157244 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="a5ddd790-cf10-4dfc-a9ed-2ad08824bf51" containerName="registry-server"
Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.157256 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="67a0e83c-f043-4329-95ac-4cc0a6ac538f" containerName="marketplace-operator"
Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.182928 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pcdgg"]
Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.184090 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pcdgg"
Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.187284 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\""
Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.319706 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4001d3a-1cc5-473a-a83f-7ae904042d7d-catalog-content\") pod \"redhat-marketplace-pcdgg\" (UID: \"d4001d3a-1cc5-473a-a83f-7ae904042d7d\") " pod="openshift-marketplace/redhat-marketplace-pcdgg"
Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.320219 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5hp9\" (UniqueName: \"kubernetes.io/projected/d4001d3a-1cc5-473a-a83f-7ae904042d7d-kube-api-access-q5hp9\") pod \"redhat-marketplace-pcdgg\" (UID: \"d4001d3a-1cc5-473a-a83f-7ae904042d7d\") " pod="openshift-marketplace/redhat-marketplace-pcdgg"
Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.320414 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4001d3a-1cc5-473a-a83f-7ae904042d7d-utilities\") pod \"redhat-marketplace-pcdgg\" (UID: \"d4001d3a-1cc5-473a-a83f-7ae904042d7d\") " pod="openshift-marketplace/redhat-marketplace-pcdgg"
Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.358931 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xvjkq"]
Need to start a new one" pod="openshift-marketplace/redhat-operators-xvjkq" Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.369433 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xvjkq"] Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.371448 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.421652 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q5hp9\" (UniqueName: \"kubernetes.io/projected/d4001d3a-1cc5-473a-a83f-7ae904042d7d-kube-api-access-q5hp9\") pod \"redhat-marketplace-pcdgg\" (UID: \"d4001d3a-1cc5-473a-a83f-7ae904042d7d\") " pod="openshift-marketplace/redhat-marketplace-pcdgg" Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.421766 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4001d3a-1cc5-473a-a83f-7ae904042d7d-utilities\") pod \"redhat-marketplace-pcdgg\" (UID: \"d4001d3a-1cc5-473a-a83f-7ae904042d7d\") " pod="openshift-marketplace/redhat-marketplace-pcdgg" Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.421814 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4001d3a-1cc5-473a-a83f-7ae904042d7d-catalog-content\") pod \"redhat-marketplace-pcdgg\" (UID: \"d4001d3a-1cc5-473a-a83f-7ae904042d7d\") " pod="openshift-marketplace/redhat-marketplace-pcdgg" Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.422632 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4001d3a-1cc5-473a-a83f-7ae904042d7d-utilities\") pod \"redhat-marketplace-pcdgg\" (UID: \"d4001d3a-1cc5-473a-a83f-7ae904042d7d\") " pod="openshift-marketplace/redhat-marketplace-pcdgg" Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.422696 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4001d3a-1cc5-473a-a83f-7ae904042d7d-catalog-content\") pod \"redhat-marketplace-pcdgg\" (UID: \"d4001d3a-1cc5-473a-a83f-7ae904042d7d\") " pod="openshift-marketplace/redhat-marketplace-pcdgg" Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.449163 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5hp9\" (UniqueName: \"kubernetes.io/projected/d4001d3a-1cc5-473a-a83f-7ae904042d7d-kube-api-access-q5hp9\") pod \"redhat-marketplace-pcdgg\" (UID: \"d4001d3a-1cc5-473a-a83f-7ae904042d7d\") " pod="openshift-marketplace/redhat-marketplace-pcdgg" Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.508392 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pcdgg" Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.523621 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cd50145-5d14-4eb5-8b45-d5c10f38600a-catalog-content\") pod \"redhat-operators-xvjkq\" (UID: \"4cd50145-5d14-4eb5-8b45-d5c10f38600a\") " pod="openshift-marketplace/redhat-operators-xvjkq" Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.523830 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cd50145-5d14-4eb5-8b45-d5c10f38600a-utilities\") pod \"redhat-operators-xvjkq\" (UID: \"4cd50145-5d14-4eb5-8b45-d5c10f38600a\") " pod="openshift-marketplace/redhat-operators-xvjkq" Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.524184 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz5pl\" (UniqueName: \"kubernetes.io/projected/4cd50145-5d14-4eb5-8b45-d5c10f38600a-kube-api-access-sz5pl\") pod \"redhat-operators-xvjkq\" (UID: \"4cd50145-5d14-4eb5-8b45-d5c10f38600a\") " pod="openshift-marketplace/redhat-operators-xvjkq" Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.629079 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sz5pl\" (UniqueName: \"kubernetes.io/projected/4cd50145-5d14-4eb5-8b45-d5c10f38600a-kube-api-access-sz5pl\") pod \"redhat-operators-xvjkq\" (UID: \"4cd50145-5d14-4eb5-8b45-d5c10f38600a\") " pod="openshift-marketplace/redhat-operators-xvjkq" Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.629441 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cd50145-5d14-4eb5-8b45-d5c10f38600a-catalog-content\") pod \"redhat-operators-xvjkq\" (UID: \"4cd50145-5d14-4eb5-8b45-d5c10f38600a\") " pod="openshift-marketplace/redhat-operators-xvjkq" Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.629500 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cd50145-5d14-4eb5-8b45-d5c10f38600a-utilities\") pod \"redhat-operators-xvjkq\" (UID: \"4cd50145-5d14-4eb5-8b45-d5c10f38600a\") " pod="openshift-marketplace/redhat-operators-xvjkq" Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.629940 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cd50145-5d14-4eb5-8b45-d5c10f38600a-catalog-content\") pod \"redhat-operators-xvjkq\" (UID: \"4cd50145-5d14-4eb5-8b45-d5c10f38600a\") " pod="openshift-marketplace/redhat-operators-xvjkq" Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.630003 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cd50145-5d14-4eb5-8b45-d5c10f38600a-utilities\") pod \"redhat-operators-xvjkq\" (UID: \"4cd50145-5d14-4eb5-8b45-d5c10f38600a\") " pod="openshift-marketplace/redhat-operators-xvjkq" Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.653858 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz5pl\" (UniqueName: \"kubernetes.io/projected/4cd50145-5d14-4eb5-8b45-d5c10f38600a-kube-api-access-sz5pl\") pod \"redhat-operators-xvjkq\" (UID: 
\"4cd50145-5d14-4eb5-8b45-d5c10f38600a\") " pod="openshift-marketplace/redhat-operators-xvjkq" Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.681789 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xvjkq" Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.920118 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28d3b79b-3ce4-427c-834d-9d4b2f9f0601" path="/var/lib/kubelet/pods/28d3b79b-3ce4-427c-834d-9d4b2f9f0601/volumes" Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.920914 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="450bc18d-8ddc-42eb-bc2b-0cd44d7198b8" path="/var/lib/kubelet/pods/450bc18d-8ddc-42eb-bc2b-0cd44d7198b8/volumes" Jan 21 18:20:07 crc kubenswrapper[5099]: I0121 18:20:07.985727 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pcdgg"] Jan 21 18:20:07 crc kubenswrapper[5099]: W0121 18:20:07.993250 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4001d3a_1cc5_473a_a83f_7ae904042d7d.slice/crio-2e2e61a4ed6c15cac010771f879aaaa7e71631fa22042d06eeed7e6e70fd344e WatchSource:0}: Error finding container 2e2e61a4ed6c15cac010771f879aaaa7e71631fa22042d06eeed7e6e70fd344e: Status 404 returned error can't find the container with id 2e2e61a4ed6c15cac010771f879aaaa7e71631fa22042d06eeed7e6e70fd344e Jan 21 18:20:08 crc kubenswrapper[5099]: W0121 18:20:08.102871 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cd50145_5d14_4eb5_8b45_d5c10f38600a.slice/crio-546c238774d12e7f823483fc844f88a6224964f5700b511d1c89845f255015da WatchSource:0}: Error finding container 546c238774d12e7f823483fc844f88a6224964f5700b511d1c89845f255015da: Status 404 returned error can't find the container with id 546c238774d12e7f823483fc844f88a6224964f5700b511d1c89845f255015da Jan 21 18:20:08 crc kubenswrapper[5099]: I0121 18:20:08.105504 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xvjkq"] Jan 21 18:20:08 crc kubenswrapper[5099]: E0121 18:20:08.206144 5099 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4001d3a_1cc5_473a_a83f_7ae904042d7d.slice/crio-872064f2089b4ea6f49c22a695429e8fb42f27bb3fdcdd0fef49988c2ce8a3ff.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4001d3a_1cc5_473a_a83f_7ae904042d7d.slice/crio-conmon-872064f2089b4ea6f49c22a695429e8fb42f27bb3fdcdd0fef49988c2ce8a3ff.scope\": RecentStats: unable to find data in memory cache]" Jan 21 18:20:08 crc kubenswrapper[5099]: I0121 18:20:08.809244 5099 generic.go:358] "Generic (PLEG): container finished" podID="d4001d3a-1cc5-473a-a83f-7ae904042d7d" containerID="872064f2089b4ea6f49c22a695429e8fb42f27bb3fdcdd0fef49988c2ce8a3ff" exitCode=0 Jan 21 18:20:08 crc kubenswrapper[5099]: I0121 18:20:08.809637 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pcdgg" event={"ID":"d4001d3a-1cc5-473a-a83f-7ae904042d7d","Type":"ContainerDied","Data":"872064f2089b4ea6f49c22a695429e8fb42f27bb3fdcdd0fef49988c2ce8a3ff"} Jan 21 18:20:08 crc kubenswrapper[5099]: I0121 18:20:08.810185 5099 kubelet.go:2569] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-pcdgg" event={"ID":"d4001d3a-1cc5-473a-a83f-7ae904042d7d","Type":"ContainerStarted","Data":"2e2e61a4ed6c15cac010771f879aaaa7e71631fa22042d06eeed7e6e70fd344e"} Jan 21 18:20:08 crc kubenswrapper[5099]: I0121 18:20:08.812922 5099 generic.go:358] "Generic (PLEG): container finished" podID="4cd50145-5d14-4eb5-8b45-d5c10f38600a" containerID="1d58f11a81fc325d3d8851bfe237e2f57b731775d85bde0c751764dea5a566b9" exitCode=0 Jan 21 18:20:08 crc kubenswrapper[5099]: I0121 18:20:08.814902 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xvjkq" event={"ID":"4cd50145-5d14-4eb5-8b45-d5c10f38600a","Type":"ContainerDied","Data":"1d58f11a81fc325d3d8851bfe237e2f57b731775d85bde0c751764dea5a566b9"} Jan 21 18:20:08 crc kubenswrapper[5099]: I0121 18:20:08.814950 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xvjkq" event={"ID":"4cd50145-5d14-4eb5-8b45-d5c10f38600a","Type":"ContainerStarted","Data":"546c238774d12e7f823483fc844f88a6224964f5700b511d1c89845f255015da"} Jan 21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.428660 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-g8brd"] Jan 21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.437122 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-g8brd" Jan 21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.444802 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-g8brd"] Jan 21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.557022 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3873b4a3-5584-45b1-9a08-3ebf7192da64-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-g8brd\" (UID: \"3873b4a3-5584-45b1-9a08-3ebf7192da64\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-g8brd" Jan 21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.557068 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3873b4a3-5584-45b1-9a08-3ebf7192da64-trusted-ca\") pod \"image-registry-5d9d95bf5b-g8brd\" (UID: \"3873b4a3-5584-45b1-9a08-3ebf7192da64\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-g8brd" Jan 21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.557107 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3873b4a3-5584-45b1-9a08-3ebf7192da64-bound-sa-token\") pod \"image-registry-5d9d95bf5b-g8brd\" (UID: \"3873b4a3-5584-45b1-9a08-3ebf7192da64\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-g8brd" Jan 21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.557144 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3873b4a3-5584-45b1-9a08-3ebf7192da64-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-g8brd\" (UID: \"3873b4a3-5584-45b1-9a08-3ebf7192da64\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-g8brd" Jan 21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.557167 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3873b4a3-5584-45b1-9a08-3ebf7192da64-registry-certificates\") pod \"image-registry-5d9d95bf5b-g8brd\" (UID: \"3873b4a3-5584-45b1-9a08-3ebf7192da64\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-g8brd" Jan 21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.557213 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-g8brd\" (UID: \"3873b4a3-5584-45b1-9a08-3ebf7192da64\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-g8brd" Jan 21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.557258 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3873b4a3-5584-45b1-9a08-3ebf7192da64-registry-tls\") pod \"image-registry-5d9d95bf5b-g8brd\" (UID: \"3873b4a3-5584-45b1-9a08-3ebf7192da64\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-g8brd" Jan 21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.557275 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf6dj\" (UniqueName: \"kubernetes.io/projected/3873b4a3-5584-45b1-9a08-3ebf7192da64-kube-api-access-bf6dj\") pod \"image-registry-5d9d95bf5b-g8brd\" (UID: \"3873b4a3-5584-45b1-9a08-3ebf7192da64\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-g8brd" Jan 21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.557316 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9hbr9"] Jan 21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.562537 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9hbr9" Jan 21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.566190 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Jan 21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.574881 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9hbr9"] Jan 21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.590105 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-g8brd\" (UID: \"3873b4a3-5584-45b1-9a08-3ebf7192da64\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-g8brd" Jan 21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.658786 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nmlk\" (UniqueName: \"kubernetes.io/projected/fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5-kube-api-access-7nmlk\") pod \"community-operators-9hbr9\" (UID: \"fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5\") " pod="openshift-marketplace/community-operators-9hbr9" Jan 21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.658837 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5-utilities\") pod \"community-operators-9hbr9\" (UID: \"fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5\") " pod="openshift-marketplace/community-operators-9hbr9" Jan 21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.658868 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3873b4a3-5584-45b1-9a08-3ebf7192da64-registry-tls\") pod \"image-registry-5d9d95bf5b-g8brd\" (UID: \"3873b4a3-5584-45b1-9a08-3ebf7192da64\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-g8brd" Jan 21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.658916 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bf6dj\" (UniqueName: \"kubernetes.io/projected/3873b4a3-5584-45b1-9a08-3ebf7192da64-kube-api-access-bf6dj\") pod \"image-registry-5d9d95bf5b-g8brd\" (UID: \"3873b4a3-5584-45b1-9a08-3ebf7192da64\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-g8brd" Jan 21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.658935 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5-catalog-content\") pod \"community-operators-9hbr9\" (UID: \"fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5\") " pod="openshift-marketplace/community-operators-9hbr9" Jan 21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.660311 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3873b4a3-5584-45b1-9a08-3ebf7192da64-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-g8brd\" (UID: \"3873b4a3-5584-45b1-9a08-3ebf7192da64\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-g8brd" Jan 21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.660387 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3873b4a3-5584-45b1-9a08-3ebf7192da64-trusted-ca\") pod \"image-registry-5d9d95bf5b-g8brd\" (UID: \"3873b4a3-5584-45b1-9a08-3ebf7192da64\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-g8brd" Jan 21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.660415 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3873b4a3-5584-45b1-9a08-3ebf7192da64-bound-sa-token\") pod \"image-registry-5d9d95bf5b-g8brd\" (UID: \"3873b4a3-5584-45b1-9a08-3ebf7192da64\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-g8brd" Jan 21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.660440 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3873b4a3-5584-45b1-9a08-3ebf7192da64-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-g8brd\" (UID: \"3873b4a3-5584-45b1-9a08-3ebf7192da64\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-g8brd" Jan 21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.660796 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3873b4a3-5584-45b1-9a08-3ebf7192da64-registry-certificates\") pod \"image-registry-5d9d95bf5b-g8brd\" (UID: \"3873b4a3-5584-45b1-9a08-3ebf7192da64\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-g8brd" Jan 21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.660917 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3873b4a3-5584-45b1-9a08-3ebf7192da64-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-g8brd\" (UID: \"3873b4a3-5584-45b1-9a08-3ebf7192da64\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-g8brd" Jan 21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.662677 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3873b4a3-5584-45b1-9a08-3ebf7192da64-registry-certificates\") pod \"image-registry-5d9d95bf5b-g8brd\" (UID: \"3873b4a3-5584-45b1-9a08-3ebf7192da64\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-g8brd" Jan 21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.663004 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3873b4a3-5584-45b1-9a08-3ebf7192da64-trusted-ca\") pod \"image-registry-5d9d95bf5b-g8brd\" (UID: \"3873b4a3-5584-45b1-9a08-3ebf7192da64\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-g8brd" Jan 21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.672536 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3873b4a3-5584-45b1-9a08-3ebf7192da64-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-g8brd\" (UID: \"3873b4a3-5584-45b1-9a08-3ebf7192da64\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-g8brd" Jan 21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.672717 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3873b4a3-5584-45b1-9a08-3ebf7192da64-registry-tls\") pod \"image-registry-5d9d95bf5b-g8brd\" (UID: \"3873b4a3-5584-45b1-9a08-3ebf7192da64\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-g8brd" Jan 
21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.679179 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf6dj\" (UniqueName: \"kubernetes.io/projected/3873b4a3-5584-45b1-9a08-3ebf7192da64-kube-api-access-bf6dj\") pod \"image-registry-5d9d95bf5b-g8brd\" (UID: \"3873b4a3-5584-45b1-9a08-3ebf7192da64\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-g8brd" Jan 21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.685138 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3873b4a3-5584-45b1-9a08-3ebf7192da64-bound-sa-token\") pod \"image-registry-5d9d95bf5b-g8brd\" (UID: \"3873b4a3-5584-45b1-9a08-3ebf7192da64\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-g8brd" Jan 21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.754869 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mgt6l"] Jan 21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.760031 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-g8brd" Jan 21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.764027 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7nmlk\" (UniqueName: \"kubernetes.io/projected/fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5-kube-api-access-7nmlk\") pod \"community-operators-9hbr9\" (UID: \"fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5\") " pod="openshift-marketplace/community-operators-9hbr9" Jan 21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.764111 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5-utilities\") pod \"community-operators-9hbr9\" (UID: \"fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5\") " pod="openshift-marketplace/community-operators-9hbr9" Jan 21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.764239 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5-catalog-content\") pod \"community-operators-9hbr9\" (UID: \"fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5\") " pod="openshift-marketplace/community-operators-9hbr9" Jan 21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.765295 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5-catalog-content\") pod \"community-operators-9hbr9\" (UID: \"fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5\") " pod="openshift-marketplace/community-operators-9hbr9" Jan 21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.765769 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5-utilities\") pod \"community-operators-9hbr9\" (UID: \"fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5\") " pod="openshift-marketplace/community-operators-9hbr9" Jan 21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.765945 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mgt6l" Jan 21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.768676 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mgt6l"] Jan 21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.769858 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Jan 21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.784705 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nmlk\" (UniqueName: \"kubernetes.io/projected/fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5-kube-api-access-7nmlk\") pod \"community-operators-9hbr9\" (UID: \"fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5\") " pod="openshift-marketplace/community-operators-9hbr9" Jan 21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.824398 5099 generic.go:358] "Generic (PLEG): container finished" podID="d4001d3a-1cc5-473a-a83f-7ae904042d7d" containerID="d001b5ca4eab330dc6c71994a34d05a31174c4d33477e07e67757e12fd5095b0" exitCode=0 Jan 21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.824490 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pcdgg" event={"ID":"d4001d3a-1cc5-473a-a83f-7ae904042d7d","Type":"ContainerDied","Data":"d001b5ca4eab330dc6c71994a34d05a31174c4d33477e07e67757e12fd5095b0"} Jan 21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.865762 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e742bf4c-6a87-4ee9-9a51-1313603c3b18-utilities\") pod \"certified-operators-mgt6l\" (UID: \"e742bf4c-6a87-4ee9-9a51-1313603c3b18\") " pod="openshift-marketplace/certified-operators-mgt6l" Jan 21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.865810 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e742bf4c-6a87-4ee9-9a51-1313603c3b18-catalog-content\") pod \"certified-operators-mgt6l\" (UID: \"e742bf4c-6a87-4ee9-9a51-1313603c3b18\") " pod="openshift-marketplace/certified-operators-mgt6l" Jan 21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.865905 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxd2g\" (UniqueName: \"kubernetes.io/projected/e742bf4c-6a87-4ee9-9a51-1313603c3b18-kube-api-access-zxd2g\") pod \"certified-operators-mgt6l\" (UID: \"e742bf4c-6a87-4ee9-9a51-1313603c3b18\") " pod="openshift-marketplace/certified-operators-mgt6l" Jan 21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.906620 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9hbr9" Jan 21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.966680 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e742bf4c-6a87-4ee9-9a51-1313603c3b18-utilities\") pod \"certified-operators-mgt6l\" (UID: \"e742bf4c-6a87-4ee9-9a51-1313603c3b18\") " pod="openshift-marketplace/certified-operators-mgt6l" Jan 21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.966758 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e742bf4c-6a87-4ee9-9a51-1313603c3b18-catalog-content\") pod \"certified-operators-mgt6l\" (UID: \"e742bf4c-6a87-4ee9-9a51-1313603c3b18\") " pod="openshift-marketplace/certified-operators-mgt6l" Jan 21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.966867 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zxd2g\" (UniqueName: \"kubernetes.io/projected/e742bf4c-6a87-4ee9-9a51-1313603c3b18-kube-api-access-zxd2g\") pod \"certified-operators-mgt6l\" (UID: \"e742bf4c-6a87-4ee9-9a51-1313603c3b18\") " pod="openshift-marketplace/certified-operators-mgt6l" Jan 21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.967654 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e742bf4c-6a87-4ee9-9a51-1313603c3b18-utilities\") pod \"certified-operators-mgt6l\" (UID: \"e742bf4c-6a87-4ee9-9a51-1313603c3b18\") " pod="openshift-marketplace/certified-operators-mgt6l" Jan 21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.967954 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e742bf4c-6a87-4ee9-9a51-1313603c3b18-catalog-content\") pod \"certified-operators-mgt6l\" (UID: \"e742bf4c-6a87-4ee9-9a51-1313603c3b18\") " pod="openshift-marketplace/certified-operators-mgt6l" Jan 21 18:20:09 crc kubenswrapper[5099]: I0121 18:20:09.990098 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxd2g\" (UniqueName: \"kubernetes.io/projected/e742bf4c-6a87-4ee9-9a51-1313603c3b18-kube-api-access-zxd2g\") pod \"certified-operators-mgt6l\" (UID: \"e742bf4c-6a87-4ee9-9a51-1313603c3b18\") " pod="openshift-marketplace/certified-operators-mgt6l" Jan 21 18:20:10 crc kubenswrapper[5099]: I0121 18:20:10.139562 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9hbr9"] Jan 21 18:20:10 crc kubenswrapper[5099]: I0121 18:20:10.154070 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mgt6l" Jan 21 18:20:10 crc kubenswrapper[5099]: I0121 18:20:10.203422 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-g8brd"] Jan 21 18:20:10 crc kubenswrapper[5099]: I0121 18:20:10.409809 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mgt6l"] Jan 21 18:20:10 crc kubenswrapper[5099]: I0121 18:20:10.832495 5099 generic.go:358] "Generic (PLEG): container finished" podID="4cd50145-5d14-4eb5-8b45-d5c10f38600a" containerID="536a91d1bdcda2821342bd2ea3019de634d2f46ad7a528e03c6dc6cb0f3f639a" exitCode=0 Jan 21 18:20:10 crc kubenswrapper[5099]: I0121 18:20:10.832591 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xvjkq" event={"ID":"4cd50145-5d14-4eb5-8b45-d5c10f38600a","Type":"ContainerDied","Data":"536a91d1bdcda2821342bd2ea3019de634d2f46ad7a528e03c6dc6cb0f3f639a"} Jan 21 18:20:10 crc kubenswrapper[5099]: I0121 18:20:10.835468 5099 generic.go:358] "Generic (PLEG): container finished" podID="fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5" containerID="e57ed611df7f2ae6b01ebbefe199af78cbfe99d70c13e364967cecf9f0015a37" exitCode=0 Jan 21 18:20:10 crc kubenswrapper[5099]: I0121 18:20:10.835610 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hbr9" event={"ID":"fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5","Type":"ContainerDied","Data":"e57ed611df7f2ae6b01ebbefe199af78cbfe99d70c13e364967cecf9f0015a37"} Jan 21 18:20:10 crc kubenswrapper[5099]: I0121 18:20:10.835651 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hbr9" event={"ID":"fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5","Type":"ContainerStarted","Data":"08e52abc13ad8ae2e68e0cffe1f5c74d463d54585ac73a7664783c43e15062ec"} Jan 21 18:20:10 crc kubenswrapper[5099]: I0121 18:20:10.841163 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pcdgg" event={"ID":"d4001d3a-1cc5-473a-a83f-7ae904042d7d","Type":"ContainerStarted","Data":"58924e9a3154561dd0ee65cad913b41fe9fb901612eab0028865529c88012b0a"} Jan 21 18:20:10 crc kubenswrapper[5099]: I0121 18:20:10.842271 5099 generic.go:358] "Generic (PLEG): container finished" podID="e742bf4c-6a87-4ee9-9a51-1313603c3b18" containerID="5d05187f838e7781dd4ceb901382c757c88ee00147beacdb93dc12bbaaebac18" exitCode=0 Jan 21 18:20:10 crc kubenswrapper[5099]: I0121 18:20:10.842378 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mgt6l" event={"ID":"e742bf4c-6a87-4ee9-9a51-1313603c3b18","Type":"ContainerDied","Data":"5d05187f838e7781dd4ceb901382c757c88ee00147beacdb93dc12bbaaebac18"} Jan 21 18:20:10 crc kubenswrapper[5099]: I0121 18:20:10.842409 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mgt6l" event={"ID":"e742bf4c-6a87-4ee9-9a51-1313603c3b18","Type":"ContainerStarted","Data":"033d2a43a2f868fde23efb8d671704cdbc44759941c8cf78d90e9e70af070f69"} Jan 21 18:20:10 crc kubenswrapper[5099]: I0121 18:20:10.846639 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-g8brd" event={"ID":"3873b4a3-5584-45b1-9a08-3ebf7192da64","Type":"ContainerStarted","Data":"ddfea7f7582ed6ae0f8ada269e92528a2e76c60b95311fea2d064f17c223a762"} Jan 21 18:20:10 crc kubenswrapper[5099]: I0121 18:20:10.846684 5099 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-g8brd" event={"ID":"3873b4a3-5584-45b1-9a08-3ebf7192da64","Type":"ContainerStarted","Data":"8dc5ce9cadc5a17a2e712a31d297d8af44e535602310a958907d4f8362f2d1a8"} Jan 21 18:20:10 crc kubenswrapper[5099]: I0121 18:20:10.847366 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-g8brd" Jan 21 18:20:10 crc kubenswrapper[5099]: I0121 18:20:10.919283 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-649b5d48d7-cdcrk"] Jan 21 18:20:10 crc kubenswrapper[5099]: I0121 18:20:10.919691 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-649b5d48d7-cdcrk" podUID="abedadc3-8c47-44fc-81f4-7dbc96610fa0" containerName="controller-manager" containerID="cri-o://c8e88ac8545650e31673fd81b4d89f73034cdc9402947a0f3cf26fed8cdc4d36" gracePeriod=30 Jan 21 18:20:10 crc kubenswrapper[5099]: I0121 18:20:10.950051 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pcdgg" podStartSLOduration=3.215565537 podStartE2EDuration="3.95003051s" podCreationTimestamp="2026-01-21 18:20:07 +0000 UTC" firstStartedPulling="2026-01-21 18:20:08.812954005 +0000 UTC m=+366.226916466" lastFinishedPulling="2026-01-21 18:20:09.547418978 +0000 UTC m=+366.961381439" observedRunningTime="2026-01-21 18:20:10.949771214 +0000 UTC m=+368.363733675" watchObservedRunningTime="2026-01-21 18:20:10.95003051 +0000 UTC m=+368.363992971" Jan 21 18:20:10 crc kubenswrapper[5099]: I0121 18:20:10.981018 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-5d9d95bf5b-g8brd" podStartSLOduration=1.981002217 podStartE2EDuration="1.981002217s" podCreationTimestamp="2026-01-21 18:20:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:20:10.978253947 +0000 UTC m=+368.392216408" watchObservedRunningTime="2026-01-21 18:20:10.981002217 +0000 UTC m=+368.394964678" Jan 21 18:20:10 crc kubenswrapper[5099]: I0121 18:20:10.992689 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64bc54d77c-hcs4c"] Jan 21 18:20:10 crc kubenswrapper[5099]: I0121 18:20:10.992969 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-64bc54d77c-hcs4c" podUID="a1628b9b-58d9-4368-bfcd-88a29a79b9d4" containerName="route-controller-manager" containerID="cri-o://76ec295d340996d7b9974bbc4cd94c8f8829685e7e40f341361da38623b7365f" gracePeriod=30 Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.356677 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-649b5d48d7-cdcrk" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.387913 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6f5c76b7d9-477hq"] Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.388512 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="abedadc3-8c47-44fc-81f4-7dbc96610fa0" containerName="controller-manager" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.388533 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="abedadc3-8c47-44fc-81f4-7dbc96610fa0" containerName="controller-manager" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.388642 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="abedadc3-8c47-44fc-81f4-7dbc96610fa0" containerName="controller-manager" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.397959 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f5c76b7d9-477hq" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.403031 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64bc54d77c-hcs4c" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.417323 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f5c76b7d9-477hq"] Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.470188 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6698b676d9-jp8th"] Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.471063 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a1628b9b-58d9-4368-bfcd-88a29a79b9d4" containerName="route-controller-manager" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.471143 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1628b9b-58d9-4368-bfcd-88a29a79b9d4" containerName="route-controller-manager" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.471328 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="a1628b9b-58d9-4368-bfcd-88a29a79b9d4" containerName="route-controller-manager" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.476929 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6698b676d9-jp8th" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.479083 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6698b676d9-jp8th"] Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.487553 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abedadc3-8c47-44fc-81f4-7dbc96610fa0-config\") pod \"abedadc3-8c47-44fc-81f4-7dbc96610fa0\" (UID: \"abedadc3-8c47-44fc-81f4-7dbc96610fa0\") " Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.487605 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1628b9b-58d9-4368-bfcd-88a29a79b9d4-serving-cert\") pod \"a1628b9b-58d9-4368-bfcd-88a29a79b9d4\" (UID: \"a1628b9b-58d9-4368-bfcd-88a29a79b9d4\") " Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.487662 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/abedadc3-8c47-44fc-81f4-7dbc96610fa0-proxy-ca-bundles\") pod \"abedadc3-8c47-44fc-81f4-7dbc96610fa0\" (UID: \"abedadc3-8c47-44fc-81f4-7dbc96610fa0\") " Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.487697 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a1628b9b-58d9-4368-bfcd-88a29a79b9d4-client-ca\") pod \"a1628b9b-58d9-4368-bfcd-88a29a79b9d4\" (UID: \"a1628b9b-58d9-4368-bfcd-88a29a79b9d4\") " Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.487721 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1628b9b-58d9-4368-bfcd-88a29a79b9d4-config\") pod \"a1628b9b-58d9-4368-bfcd-88a29a79b9d4\" (UID: \"a1628b9b-58d9-4368-bfcd-88a29a79b9d4\") " Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.487769 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a1628b9b-58d9-4368-bfcd-88a29a79b9d4-tmp\") pod \"a1628b9b-58d9-4368-bfcd-88a29a79b9d4\" (UID: \"a1628b9b-58d9-4368-bfcd-88a29a79b9d4\") " Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.487796 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/abedadc3-8c47-44fc-81f4-7dbc96610fa0-tmp\") pod \"abedadc3-8c47-44fc-81f4-7dbc96610fa0\" (UID: \"abedadc3-8c47-44fc-81f4-7dbc96610fa0\") " Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.487889 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/abedadc3-8c47-44fc-81f4-7dbc96610fa0-client-ca\") pod \"abedadc3-8c47-44fc-81f4-7dbc96610fa0\" (UID: \"abedadc3-8c47-44fc-81f4-7dbc96610fa0\") " Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.487954 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abedadc3-8c47-44fc-81f4-7dbc96610fa0-serving-cert\") pod \"abedadc3-8c47-44fc-81f4-7dbc96610fa0\" (UID: \"abedadc3-8c47-44fc-81f4-7dbc96610fa0\") " Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.487977 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-rz6wz\" (UniqueName: \"kubernetes.io/projected/abedadc3-8c47-44fc-81f4-7dbc96610fa0-kube-api-access-rz6wz\") pod \"abedadc3-8c47-44fc-81f4-7dbc96610fa0\" (UID: \"abedadc3-8c47-44fc-81f4-7dbc96610fa0\") " Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.488019 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsztf\" (UniqueName: \"kubernetes.io/projected/a1628b9b-58d9-4368-bfcd-88a29a79b9d4-kube-api-access-lsztf\") pod \"a1628b9b-58d9-4368-bfcd-88a29a79b9d4\" (UID: \"a1628b9b-58d9-4368-bfcd-88a29a79b9d4\") " Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.488131 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dbb87b58-ba29-49b3-83d3-ad61e46b05ef-client-ca\") pod \"controller-manager-6f5c76b7d9-477hq\" (UID: \"dbb87b58-ba29-49b3-83d3-ad61e46b05ef\") " pod="openshift-controller-manager/controller-manager-6f5c76b7d9-477hq" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.488151 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dbb87b58-ba29-49b3-83d3-ad61e46b05ef-serving-cert\") pod \"controller-manager-6f5c76b7d9-477hq\" (UID: \"dbb87b58-ba29-49b3-83d3-ad61e46b05ef\") " pod="openshift-controller-manager/controller-manager-6f5c76b7d9-477hq" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.488209 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/dbb87b58-ba29-49b3-83d3-ad61e46b05ef-tmp\") pod \"controller-manager-6f5c76b7d9-477hq\" (UID: \"dbb87b58-ba29-49b3-83d3-ad61e46b05ef\") " pod="openshift-controller-manager/controller-manager-6f5c76b7d9-477hq" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.488231 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dbb87b58-ba29-49b3-83d3-ad61e46b05ef-proxy-ca-bundles\") pod \"controller-manager-6f5c76b7d9-477hq\" (UID: \"dbb87b58-ba29-49b3-83d3-ad61e46b05ef\") " pod="openshift-controller-manager/controller-manager-6f5c76b7d9-477hq" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.488259 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh4hz\" (UniqueName: \"kubernetes.io/projected/dbb87b58-ba29-49b3-83d3-ad61e46b05ef-kube-api-access-xh4hz\") pod \"controller-manager-6f5c76b7d9-477hq\" (UID: \"dbb87b58-ba29-49b3-83d3-ad61e46b05ef\") " pod="openshift-controller-manager/controller-manager-6f5c76b7d9-477hq" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.488299 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbb87b58-ba29-49b3-83d3-ad61e46b05ef-config\") pod \"controller-manager-6f5c76b7d9-477hq\" (UID: \"dbb87b58-ba29-49b3-83d3-ad61e46b05ef\") " pod="openshift-controller-manager/controller-manager-6f5c76b7d9-477hq" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.488564 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1628b9b-58d9-4368-bfcd-88a29a79b9d4-tmp" (OuterVolumeSpecName: "tmp") pod "a1628b9b-58d9-4368-bfcd-88a29a79b9d4" (UID: "a1628b9b-58d9-4368-bfcd-88a29a79b9d4"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.488593 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1628b9b-58d9-4368-bfcd-88a29a79b9d4-config" (OuterVolumeSpecName: "config") pod "a1628b9b-58d9-4368-bfcd-88a29a79b9d4" (UID: "a1628b9b-58d9-4368-bfcd-88a29a79b9d4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.488595 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abedadc3-8c47-44fc-81f4-7dbc96610fa0-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "abedadc3-8c47-44fc-81f4-7dbc96610fa0" (UID: "abedadc3-8c47-44fc-81f4-7dbc96610fa0"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.488659 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abedadc3-8c47-44fc-81f4-7dbc96610fa0-config" (OuterVolumeSpecName: "config") pod "abedadc3-8c47-44fc-81f4-7dbc96610fa0" (UID: "abedadc3-8c47-44fc-81f4-7dbc96610fa0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.488937 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1628b9b-58d9-4368-bfcd-88a29a79b9d4-client-ca" (OuterVolumeSpecName: "client-ca") pod "a1628b9b-58d9-4368-bfcd-88a29a79b9d4" (UID: "a1628b9b-58d9-4368-bfcd-88a29a79b9d4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.488971 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abedadc3-8c47-44fc-81f4-7dbc96610fa0-client-ca" (OuterVolumeSpecName: "client-ca") pod "abedadc3-8c47-44fc-81f4-7dbc96610fa0" (UID: "abedadc3-8c47-44fc-81f4-7dbc96610fa0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.489183 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abedadc3-8c47-44fc-81f4-7dbc96610fa0-tmp" (OuterVolumeSpecName: "tmp") pod "abedadc3-8c47-44fc-81f4-7dbc96610fa0" (UID: "abedadc3-8c47-44fc-81f4-7dbc96610fa0"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.495987 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abedadc3-8c47-44fc-81f4-7dbc96610fa0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "abedadc3-8c47-44fc-81f4-7dbc96610fa0" (UID: "abedadc3-8c47-44fc-81f4-7dbc96610fa0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.496484 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1628b9b-58d9-4368-bfcd-88a29a79b9d4-kube-api-access-lsztf" (OuterVolumeSpecName: "kube-api-access-lsztf") pod "a1628b9b-58d9-4368-bfcd-88a29a79b9d4" (UID: "a1628b9b-58d9-4368-bfcd-88a29a79b9d4"). InnerVolumeSpecName "kube-api-access-lsztf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.496952 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1628b9b-58d9-4368-bfcd-88a29a79b9d4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a1628b9b-58d9-4368-bfcd-88a29a79b9d4" (UID: "a1628b9b-58d9-4368-bfcd-88a29a79b9d4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.497230 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abedadc3-8c47-44fc-81f4-7dbc96610fa0-kube-api-access-rz6wz" (OuterVolumeSpecName: "kube-api-access-rz6wz") pod "abedadc3-8c47-44fc-81f4-7dbc96610fa0" (UID: "abedadc3-8c47-44fc-81f4-7dbc96610fa0"). InnerVolumeSpecName "kube-api-access-rz6wz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.589494 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7161ab8-9736-44f6-aa9d-ace917ce98a3-client-ca\") pod \"route-controller-manager-6698b676d9-jp8th\" (UID: \"c7161ab8-9736-44f6-aa9d-ace917ce98a3\") " pod="openshift-route-controller-manager/route-controller-manager-6698b676d9-jp8th" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.590037 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7161ab8-9736-44f6-aa9d-ace917ce98a3-config\") pod \"route-controller-manager-6698b676d9-jp8th\" (UID: \"c7161ab8-9736-44f6-aa9d-ace917ce98a3\") " pod="openshift-route-controller-manager/route-controller-manager-6698b676d9-jp8th" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.590073 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c7161ab8-9736-44f6-aa9d-ace917ce98a3-tmp\") pod \"route-controller-manager-6698b676d9-jp8th\" (UID: \"c7161ab8-9736-44f6-aa9d-ace917ce98a3\") " pod="openshift-route-controller-manager/route-controller-manager-6698b676d9-jp8th" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.590137 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/dbb87b58-ba29-49b3-83d3-ad61e46b05ef-tmp\") pod \"controller-manager-6f5c76b7d9-477hq\" (UID: \"dbb87b58-ba29-49b3-83d3-ad61e46b05ef\") " pod="openshift-controller-manager/controller-manager-6f5c76b7d9-477hq" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.590189 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dbb87b58-ba29-49b3-83d3-ad61e46b05ef-proxy-ca-bundles\") pod \"controller-manager-6f5c76b7d9-477hq\" (UID: \"dbb87b58-ba29-49b3-83d3-ad61e46b05ef\") " pod="openshift-controller-manager/controller-manager-6f5c76b7d9-477hq" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.590221 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xh4hz\" (UniqueName: \"kubernetes.io/projected/dbb87b58-ba29-49b3-83d3-ad61e46b05ef-kube-api-access-xh4hz\") pod \"controller-manager-6f5c76b7d9-477hq\" (UID: \"dbb87b58-ba29-49b3-83d3-ad61e46b05ef\") " pod="openshift-controller-manager/controller-manager-6f5c76b7d9-477hq" Jan 21 
18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.590260 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbb87b58-ba29-49b3-83d3-ad61e46b05ef-config\") pod \"controller-manager-6f5c76b7d9-477hq\" (UID: \"dbb87b58-ba29-49b3-83d3-ad61e46b05ef\") " pod="openshift-controller-manager/controller-manager-6f5c76b7d9-477hq" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.590286 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7161ab8-9736-44f6-aa9d-ace917ce98a3-serving-cert\") pod \"route-controller-manager-6698b676d9-jp8th\" (UID: \"c7161ab8-9736-44f6-aa9d-ace917ce98a3\") " pod="openshift-route-controller-manager/route-controller-manager-6698b676d9-jp8th" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.590317 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dbb87b58-ba29-49b3-83d3-ad61e46b05ef-client-ca\") pod \"controller-manager-6f5c76b7d9-477hq\" (UID: \"dbb87b58-ba29-49b3-83d3-ad61e46b05ef\") " pod="openshift-controller-manager/controller-manager-6f5c76b7d9-477hq" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.590339 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmg76\" (UniqueName: \"kubernetes.io/projected/c7161ab8-9736-44f6-aa9d-ace917ce98a3-kube-api-access-xmg76\") pod \"route-controller-manager-6698b676d9-jp8th\" (UID: \"c7161ab8-9736-44f6-aa9d-ace917ce98a3\") " pod="openshift-route-controller-manager/route-controller-manager-6698b676d9-jp8th" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.590360 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dbb87b58-ba29-49b3-83d3-ad61e46b05ef-serving-cert\") pod \"controller-manager-6f5c76b7d9-477hq\" (UID: \"dbb87b58-ba29-49b3-83d3-ad61e46b05ef\") " pod="openshift-controller-manager/controller-manager-6f5c76b7d9-477hq" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.590411 5099 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/abedadc3-8c47-44fc-81f4-7dbc96610fa0-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.590422 5099 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a1628b9b-58d9-4368-bfcd-88a29a79b9d4-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.590432 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1628b9b-58d9-4368-bfcd-88a29a79b9d4-config\") on node \"crc\" DevicePath \"\"" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.590442 5099 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a1628b9b-58d9-4368-bfcd-88a29a79b9d4-tmp\") on node \"crc\" DevicePath \"\"" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.590453 5099 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/abedadc3-8c47-44fc-81f4-7dbc96610fa0-tmp\") on node \"crc\" DevicePath \"\"" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.590462 5099 reconciler_common.go:299] "Volume detached for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/abedadc3-8c47-44fc-81f4-7dbc96610fa0-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.590471 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abedadc3-8c47-44fc-81f4-7dbc96610fa0-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.590481 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rz6wz\" (UniqueName: \"kubernetes.io/projected/abedadc3-8c47-44fc-81f4-7dbc96610fa0-kube-api-access-rz6wz\") on node \"crc\" DevicePath \"\"" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.590492 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lsztf\" (UniqueName: \"kubernetes.io/projected/a1628b9b-58d9-4368-bfcd-88a29a79b9d4-kube-api-access-lsztf\") on node \"crc\" DevicePath \"\"" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.590502 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abedadc3-8c47-44fc-81f4-7dbc96610fa0-config\") on node \"crc\" DevicePath \"\"" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.590513 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1628b9b-58d9-4368-bfcd-88a29a79b9d4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.592281 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbb87b58-ba29-49b3-83d3-ad61e46b05ef-config\") pod \"controller-manager-6f5c76b7d9-477hq\" (UID: \"dbb87b58-ba29-49b3-83d3-ad61e46b05ef\") " pod="openshift-controller-manager/controller-manager-6f5c76b7d9-477hq" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.592429 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dbb87b58-ba29-49b3-83d3-ad61e46b05ef-client-ca\") pod \"controller-manager-6f5c76b7d9-477hq\" (UID: \"dbb87b58-ba29-49b3-83d3-ad61e46b05ef\") " pod="openshift-controller-manager/controller-manager-6f5c76b7d9-477hq" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.592630 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/dbb87b58-ba29-49b3-83d3-ad61e46b05ef-tmp\") pod \"controller-manager-6f5c76b7d9-477hq\" (UID: \"dbb87b58-ba29-49b3-83d3-ad61e46b05ef\") " pod="openshift-controller-manager/controller-manager-6f5c76b7d9-477hq" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.593031 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dbb87b58-ba29-49b3-83d3-ad61e46b05ef-proxy-ca-bundles\") pod \"controller-manager-6f5c76b7d9-477hq\" (UID: \"dbb87b58-ba29-49b3-83d3-ad61e46b05ef\") " pod="openshift-controller-manager/controller-manager-6f5c76b7d9-477hq" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.596134 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dbb87b58-ba29-49b3-83d3-ad61e46b05ef-serving-cert\") pod \"controller-manager-6f5c76b7d9-477hq\" (UID: \"dbb87b58-ba29-49b3-83d3-ad61e46b05ef\") " pod="openshift-controller-manager/controller-manager-6f5c76b7d9-477hq" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 
18:20:11.608308 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xh4hz\" (UniqueName: \"kubernetes.io/projected/dbb87b58-ba29-49b3-83d3-ad61e46b05ef-kube-api-access-xh4hz\") pod \"controller-manager-6f5c76b7d9-477hq\" (UID: \"dbb87b58-ba29-49b3-83d3-ad61e46b05ef\") " pod="openshift-controller-manager/controller-manager-6f5c76b7d9-477hq" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.691627 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c7161ab8-9736-44f6-aa9d-ace917ce98a3-tmp\") pod \"route-controller-manager-6698b676d9-jp8th\" (UID: \"c7161ab8-9736-44f6-aa9d-ace917ce98a3\") " pod="openshift-route-controller-manager/route-controller-manager-6698b676d9-jp8th" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.691812 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7161ab8-9736-44f6-aa9d-ace917ce98a3-serving-cert\") pod \"route-controller-manager-6698b676d9-jp8th\" (UID: \"c7161ab8-9736-44f6-aa9d-ace917ce98a3\") " pod="openshift-route-controller-manager/route-controller-manager-6698b676d9-jp8th" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.691873 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xmg76\" (UniqueName: \"kubernetes.io/projected/c7161ab8-9736-44f6-aa9d-ace917ce98a3-kube-api-access-xmg76\") pod \"route-controller-manager-6698b676d9-jp8th\" (UID: \"c7161ab8-9736-44f6-aa9d-ace917ce98a3\") " pod="openshift-route-controller-manager/route-controller-manager-6698b676d9-jp8th" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.692175 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7161ab8-9736-44f6-aa9d-ace917ce98a3-client-ca\") pod \"route-controller-manager-6698b676d9-jp8th\" (UID: \"c7161ab8-9736-44f6-aa9d-ace917ce98a3\") " pod="openshift-route-controller-manager/route-controller-manager-6698b676d9-jp8th" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.692372 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7161ab8-9736-44f6-aa9d-ace917ce98a3-config\") pod \"route-controller-manager-6698b676d9-jp8th\" (UID: \"c7161ab8-9736-44f6-aa9d-ace917ce98a3\") " pod="openshift-route-controller-manager/route-controller-manager-6698b676d9-jp8th" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.693135 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c7161ab8-9736-44f6-aa9d-ace917ce98a3-tmp\") pod \"route-controller-manager-6698b676d9-jp8th\" (UID: \"c7161ab8-9736-44f6-aa9d-ace917ce98a3\") " pod="openshift-route-controller-manager/route-controller-manager-6698b676d9-jp8th" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.694675 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7161ab8-9736-44f6-aa9d-ace917ce98a3-client-ca\") pod \"route-controller-manager-6698b676d9-jp8th\" (UID: \"c7161ab8-9736-44f6-aa9d-ace917ce98a3\") " pod="openshift-route-controller-manager/route-controller-manager-6698b676d9-jp8th" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.695395 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c7161ab8-9736-44f6-aa9d-ace917ce98a3-config\") pod \"route-controller-manager-6698b676d9-jp8th\" (UID: \"c7161ab8-9736-44f6-aa9d-ace917ce98a3\") " pod="openshift-route-controller-manager/route-controller-manager-6698b676d9-jp8th" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.706327 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7161ab8-9736-44f6-aa9d-ace917ce98a3-serving-cert\") pod \"route-controller-manager-6698b676d9-jp8th\" (UID: \"c7161ab8-9736-44f6-aa9d-ace917ce98a3\") " pod="openshift-route-controller-manager/route-controller-manager-6698b676d9-jp8th" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.710189 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmg76\" (UniqueName: \"kubernetes.io/projected/c7161ab8-9736-44f6-aa9d-ace917ce98a3-kube-api-access-xmg76\") pod \"route-controller-manager-6698b676d9-jp8th\" (UID: \"c7161ab8-9736-44f6-aa9d-ace917ce98a3\") " pod="openshift-route-controller-manager/route-controller-manager-6698b676d9-jp8th" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.712473 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f5c76b7d9-477hq" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.791315 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6698b676d9-jp8th" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.875537 5099 generic.go:358] "Generic (PLEG): container finished" podID="abedadc3-8c47-44fc-81f4-7dbc96610fa0" containerID="c8e88ac8545650e31673fd81b4d89f73034cdc9402947a0f3cf26fed8cdc4d36" exitCode=0 Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.875914 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-649b5d48d7-cdcrk" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.875939 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-649b5d48d7-cdcrk" event={"ID":"abedadc3-8c47-44fc-81f4-7dbc96610fa0","Type":"ContainerDied","Data":"c8e88ac8545650e31673fd81b4d89f73034cdc9402947a0f3cf26fed8cdc4d36"} Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.878730 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-649b5d48d7-cdcrk" event={"ID":"abedadc3-8c47-44fc-81f4-7dbc96610fa0","Type":"ContainerDied","Data":"3c2be2cbb016a2ed5cad9476d103b6db71569cab8731eeda2cd2482ef50ac47d"} Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.878809 5099 scope.go:117] "RemoveContainer" containerID="c8e88ac8545650e31673fd81b4d89f73034cdc9402947a0f3cf26fed8cdc4d36" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.943406 5099 generic.go:358] "Generic (PLEG): container finished" podID="a1628b9b-58d9-4368-bfcd-88a29a79b9d4" containerID="76ec295d340996d7b9974bbc4cd94c8f8829685e7e40f341361da38623b7365f" exitCode=0 Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.943674 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64bc54d77c-hcs4c" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.955910 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xvjkq" event={"ID":"4cd50145-5d14-4eb5-8b45-d5c10f38600a","Type":"ContainerStarted","Data":"0bbf569e3cdca5836d7407a903ba2a4fa24b7cf606b8e2a4318493c6dac21303"} Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.955972 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64bc54d77c-hcs4c" event={"ID":"a1628b9b-58d9-4368-bfcd-88a29a79b9d4","Type":"ContainerDied","Data":"76ec295d340996d7b9974bbc4cd94c8f8829685e7e40f341361da38623b7365f"} Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.956017 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64bc54d77c-hcs4c" event={"ID":"a1628b9b-58d9-4368-bfcd-88a29a79b9d4","Type":"ContainerDied","Data":"2e9cc2a6442c5a87cbbbe7fd9a89c7edd93201caf959f64c6cae2ebc29912af7"} Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.974759 5099 scope.go:117] "RemoveContainer" containerID="c8e88ac8545650e31673fd81b4d89f73034cdc9402947a0f3cf26fed8cdc4d36" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.976558 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xvjkq" podStartSLOduration=4.074698648 podStartE2EDuration="4.976542835s" podCreationTimestamp="2026-01-21 18:20:07 +0000 UTC" firstStartedPulling="2026-01-21 18:20:08.814672019 +0000 UTC m=+366.228634480" lastFinishedPulling="2026-01-21 18:20:09.716516206 +0000 UTC m=+367.130478667" observedRunningTime="2026-01-21 18:20:11.955545241 +0000 UTC m=+369.369507712" watchObservedRunningTime="2026-01-21 18:20:11.976542835 +0000 UTC m=+369.390505286" Jan 21 18:20:11 crc kubenswrapper[5099]: E0121 18:20:11.980027 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8e88ac8545650e31673fd81b4d89f73034cdc9402947a0f3cf26fed8cdc4d36\": container with ID starting with c8e88ac8545650e31673fd81b4d89f73034cdc9402947a0f3cf26fed8cdc4d36 not found: ID does not exist" containerID="c8e88ac8545650e31673fd81b4d89f73034cdc9402947a0f3cf26fed8cdc4d36" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.980086 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8e88ac8545650e31673fd81b4d89f73034cdc9402947a0f3cf26fed8cdc4d36"} err="failed to get container status \"c8e88ac8545650e31673fd81b4d89f73034cdc9402947a0f3cf26fed8cdc4d36\": rpc error: code = NotFound desc = could not find container \"c8e88ac8545650e31673fd81b4d89f73034cdc9402947a0f3cf26fed8cdc4d36\": container with ID starting with c8e88ac8545650e31673fd81b4d89f73034cdc9402947a0f3cf26fed8cdc4d36 not found: ID does not exist" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.980113 5099 scope.go:117] "RemoveContainer" containerID="76ec295d340996d7b9974bbc4cd94c8f8829685e7e40f341361da38623b7365f" Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.985400 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-649b5d48d7-cdcrk"] Jan 21 18:20:11 crc kubenswrapper[5099]: I0121 18:20:11.991810 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-649b5d48d7-cdcrk"] 
Jan 21 18:20:12 crc kubenswrapper[5099]: I0121 18:20:12.033107 5099 scope.go:117] "RemoveContainer" containerID="76ec295d340996d7b9974bbc4cd94c8f8829685e7e40f341361da38623b7365f" Jan 21 18:20:12 crc kubenswrapper[5099]: E0121 18:20:12.033946 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76ec295d340996d7b9974bbc4cd94c8f8829685e7e40f341361da38623b7365f\": container with ID starting with 76ec295d340996d7b9974bbc4cd94c8f8829685e7e40f341361da38623b7365f not found: ID does not exist" containerID="76ec295d340996d7b9974bbc4cd94c8f8829685e7e40f341361da38623b7365f" Jan 21 18:20:12 crc kubenswrapper[5099]: I0121 18:20:12.033978 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76ec295d340996d7b9974bbc4cd94c8f8829685e7e40f341361da38623b7365f"} err="failed to get container status \"76ec295d340996d7b9974bbc4cd94c8f8829685e7e40f341361da38623b7365f\": rpc error: code = NotFound desc = could not find container \"76ec295d340996d7b9974bbc4cd94c8f8829685e7e40f341361da38623b7365f\": container with ID starting with 76ec295d340996d7b9974bbc4cd94c8f8829685e7e40f341361da38623b7365f not found: ID does not exist" Jan 21 18:20:12 crc kubenswrapper[5099]: I0121 18:20:12.044935 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64bc54d77c-hcs4c"] Jan 21 18:20:12 crc kubenswrapper[5099]: I0121 18:20:12.054494 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64bc54d77c-hcs4c"] Jan 21 18:20:12 crc kubenswrapper[5099]: I0121 18:20:12.188513 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f5c76b7d9-477hq"] Jan 21 18:20:12 crc kubenswrapper[5099]: I0121 18:20:12.307079 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6698b676d9-jp8th"] Jan 21 18:20:12 crc kubenswrapper[5099]: W0121 18:20:12.377986 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7161ab8_9736_44f6_aa9d_ace917ce98a3.slice/crio-f0a3faa554926d2d70c3fb313bc1dcffd923a54bf02216eb9757175bc98d2e21 WatchSource:0}: Error finding container f0a3faa554926d2d70c3fb313bc1dcffd923a54bf02216eb9757175bc98d2e21: Status 404 returned error can't find the container with id f0a3faa554926d2d70c3fb313bc1dcffd923a54bf02216eb9757175bc98d2e21 Jan 21 18:20:12 crc kubenswrapper[5099]: I0121 18:20:12.953296 5099 generic.go:358] "Generic (PLEG): container finished" podID="e742bf4c-6a87-4ee9-9a51-1313603c3b18" containerID="11b00e01f72e5410f717ed7544fee38cab08dd99a3e7953f9bce0152c673aaba" exitCode=0 Jan 21 18:20:12 crc kubenswrapper[5099]: I0121 18:20:12.953363 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mgt6l" event={"ID":"e742bf4c-6a87-4ee9-9a51-1313603c3b18","Type":"ContainerDied","Data":"11b00e01f72e5410f717ed7544fee38cab08dd99a3e7953f9bce0152c673aaba"} Jan 21 18:20:12 crc kubenswrapper[5099]: I0121 18:20:12.956320 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6698b676d9-jp8th" event={"ID":"c7161ab8-9736-44f6-aa9d-ace917ce98a3","Type":"ContainerStarted","Data":"10b5490b3d6b8707ede95938dea4c107bfa7369c9fa892d986d4826c574b2b98"} Jan 21 18:20:12 crc kubenswrapper[5099]: I0121 
18:20:12.956346 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6698b676d9-jp8th" event={"ID":"c7161ab8-9736-44f6-aa9d-ace917ce98a3","Type":"ContainerStarted","Data":"f0a3faa554926d2d70c3fb313bc1dcffd923a54bf02216eb9757175bc98d2e21"} Jan 21 18:20:12 crc kubenswrapper[5099]: I0121 18:20:12.958333 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-6698b676d9-jp8th" Jan 21 18:20:12 crc kubenswrapper[5099]: I0121 18:20:12.960695 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f5c76b7d9-477hq" event={"ID":"dbb87b58-ba29-49b3-83d3-ad61e46b05ef","Type":"ContainerStarted","Data":"20d45b352582670bfb969713d7e35abebc96eb6518e1b6007ee6c60ace45fbb8"} Jan 21 18:20:12 crc kubenswrapper[5099]: I0121 18:20:12.960800 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f5c76b7d9-477hq" event={"ID":"dbb87b58-ba29-49b3-83d3-ad61e46b05ef","Type":"ContainerStarted","Data":"4d1adce82c8e61dd863c65da6eb53d33d2a01d96ef09d5b2a008381b0f7732b7"} Jan 21 18:20:12 crc kubenswrapper[5099]: I0121 18:20:12.961843 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-6f5c76b7d9-477hq" Jan 21 18:20:12 crc kubenswrapper[5099]: I0121 18:20:12.965616 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6f5c76b7d9-477hq" Jan 21 18:20:12 crc kubenswrapper[5099]: I0121 18:20:12.967673 5099 generic.go:358] "Generic (PLEG): container finished" podID="fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5" containerID="55c9623d4fe4213f6abad7722a68ec5e010a6da82d526f22511aac30accb5825" exitCode=0 Jan 21 18:20:12 crc kubenswrapper[5099]: I0121 18:20:12.967878 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hbr9" event={"ID":"fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5","Type":"ContainerDied","Data":"55c9623d4fe4213f6abad7722a68ec5e010a6da82d526f22511aac30accb5825"} Jan 21 18:20:13 crc kubenswrapper[5099]: I0121 18:20:13.033363 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6f5c76b7d9-477hq" podStartSLOduration=3.033338899 podStartE2EDuration="3.033338899s" podCreationTimestamp="2026-01-21 18:20:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:20:13.026964277 +0000 UTC m=+370.440926758" watchObservedRunningTime="2026-01-21 18:20:13.033338899 +0000 UTC m=+370.447301360" Jan 21 18:20:13 crc kubenswrapper[5099]: I0121 18:20:13.061941 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6698b676d9-jp8th" podStartSLOduration=2.061918235 podStartE2EDuration="2.061918235s" podCreationTimestamp="2026-01-21 18:20:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:20:13.05854526 +0000 UTC m=+370.472507711" watchObservedRunningTime="2026-01-21 18:20:13.061918235 +0000 UTC m=+370.475880696" Jan 21 18:20:13 crc kubenswrapper[5099]: I0121 18:20:13.241012 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-route-controller-manager/route-controller-manager-6698b676d9-jp8th" Jan 21 18:20:13 crc kubenswrapper[5099]: I0121 18:20:13.921901 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1628b9b-58d9-4368-bfcd-88a29a79b9d4" path="/var/lib/kubelet/pods/a1628b9b-58d9-4368-bfcd-88a29a79b9d4/volumes" Jan 21 18:20:13 crc kubenswrapper[5099]: I0121 18:20:13.922444 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abedadc3-8c47-44fc-81f4-7dbc96610fa0" path="/var/lib/kubelet/pods/abedadc3-8c47-44fc-81f4-7dbc96610fa0/volumes" Jan 21 18:20:13 crc kubenswrapper[5099]: I0121 18:20:13.976730 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mgt6l" event={"ID":"e742bf4c-6a87-4ee9-9a51-1313603c3b18","Type":"ContainerStarted","Data":"79b7739d560666ebbaaa77a4f5f84c3df347aa6f4223f5c4d4d218d6379ab9d0"} Jan 21 18:20:13 crc kubenswrapper[5099]: I0121 18:20:13.979449 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hbr9" event={"ID":"fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5","Type":"ContainerStarted","Data":"f4f67cd0d06c0b5d4992d7bc431422c535fc4cfb8439a1427898d3509a0b9a3e"} Jan 21 18:20:14 crc kubenswrapper[5099]: I0121 18:20:14.009567 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mgt6l" podStartSLOduration=4.103231676 podStartE2EDuration="5.009513135s" podCreationTimestamp="2026-01-21 18:20:09 +0000 UTC" firstStartedPulling="2026-01-21 18:20:10.844051927 +0000 UTC m=+368.258014388" lastFinishedPulling="2026-01-21 18:20:11.750333386 +0000 UTC m=+369.164295847" observedRunningTime="2026-01-21 18:20:14.002382573 +0000 UTC m=+371.416345044" watchObservedRunningTime="2026-01-21 18:20:14.009513135 +0000 UTC m=+371.423475596" Jan 21 18:20:14 crc kubenswrapper[5099]: I0121 18:20:14.028074 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9hbr9" podStartSLOduration=4.02157105 podStartE2EDuration="5.028046395s" podCreationTimestamp="2026-01-21 18:20:09 +0000 UTC" firstStartedPulling="2026-01-21 18:20:10.836321851 +0000 UTC m=+368.250284312" lastFinishedPulling="2026-01-21 18:20:11.842797196 +0000 UTC m=+369.256759657" observedRunningTime="2026-01-21 18:20:14.023310295 +0000 UTC m=+371.437272776" watchObservedRunningTime="2026-01-21 18:20:14.028046395 +0000 UTC m=+371.442008856" Jan 21 18:20:17 crc kubenswrapper[5099]: I0121 18:20:17.509486 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pcdgg" Jan 21 18:20:17 crc kubenswrapper[5099]: I0121 18:20:17.513763 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-pcdgg" Jan 21 18:20:17 crc kubenswrapper[5099]: I0121 18:20:17.570651 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pcdgg" Jan 21 18:20:17 crc kubenswrapper[5099]: I0121 18:20:17.681995 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xvjkq" Jan 21 18:20:17 crc kubenswrapper[5099]: I0121 18:20:17.682476 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-xvjkq" Jan 21 18:20:17 crc kubenswrapper[5099]: I0121 18:20:17.741899 5099 kubelet.go:2658] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xvjkq" Jan 21 18:20:18 crc kubenswrapper[5099]: I0121 18:20:18.060578 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xvjkq" Jan 21 18:20:18 crc kubenswrapper[5099]: I0121 18:20:18.237922 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pcdgg" Jan 21 18:20:19 crc kubenswrapper[5099]: I0121 18:20:19.907801 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9hbr9" Jan 21 18:20:19 crc kubenswrapper[5099]: I0121 18:20:19.908757 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-9hbr9" Jan 21 18:20:19 crc kubenswrapper[5099]: I0121 18:20:19.970680 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9hbr9" Jan 21 18:20:20 crc kubenswrapper[5099]: I0121 18:20:20.062332 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9hbr9" Jan 21 18:20:20 crc kubenswrapper[5099]: I0121 18:20:20.154766 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mgt6l" Jan 21 18:20:20 crc kubenswrapper[5099]: I0121 18:20:20.155423 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-mgt6l" Jan 21 18:20:20 crc kubenswrapper[5099]: I0121 18:20:20.204972 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mgt6l" Jan 21 18:20:21 crc kubenswrapper[5099]: I0121 18:20:21.090091 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mgt6l" Jan 21 18:20:32 crc kubenswrapper[5099]: I0121 18:20:32.976124 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-g8brd" Jan 21 18:20:33 crc kubenswrapper[5099]: I0121 18:20:33.039460 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-tjl2r"] Jan 21 18:20:52 crc kubenswrapper[5099]: I0121 18:20:52.065407 5099 patch_prober.go:28] interesting pod/machine-config-daemon-hsl47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 18:20:52 crc kubenswrapper[5099]: I0121 18:20:52.066088 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 18:20:58 crc kubenswrapper[5099]: I0121 18:20:58.094884 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" podUID="90ce37a0-d38f-4712-89f0-8572a04c303d" containerName="registry" containerID="cri-o://5bd603a21814f9bf4bef85f84ca0bba031d42b79c8a0b15414fea6e193421340" gracePeriod=30 Jan 21 18:20:58 crc kubenswrapper[5099]: I0121 
18:20:58.415418 5099 generic.go:358] "Generic (PLEG): container finished" podID="90ce37a0-d38f-4712-89f0-8572a04c303d" containerID="5bd603a21814f9bf4bef85f84ca0bba031d42b79c8a0b15414fea6e193421340" exitCode=0 Jan 21 18:20:58 crc kubenswrapper[5099]: I0121 18:20:58.415516 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" event={"ID":"90ce37a0-d38f-4712-89f0-8572a04c303d","Type":"ContainerDied","Data":"5bd603a21814f9bf4bef85f84ca0bba031d42b79c8a0b15414fea6e193421340"} Jan 21 18:20:58 crc kubenswrapper[5099]: I0121 18:20:58.613777 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:20:58 crc kubenswrapper[5099]: I0121 18:20:58.708407 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2r7t\" (UniqueName: \"kubernetes.io/projected/90ce37a0-d38f-4712-89f0-8572a04c303d-kube-api-access-n2r7t\") pod \"90ce37a0-d38f-4712-89f0-8572a04c303d\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " Jan 21 18:20:58 crc kubenswrapper[5099]: I0121 18:20:58.708479 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/90ce37a0-d38f-4712-89f0-8572a04c303d-registry-certificates\") pod \"90ce37a0-d38f-4712-89f0-8572a04c303d\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " Jan 21 18:20:58 crc kubenswrapper[5099]: I0121 18:20:58.708522 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/90ce37a0-d38f-4712-89f0-8572a04c303d-registry-tls\") pod \"90ce37a0-d38f-4712-89f0-8572a04c303d\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " Jan 21 18:20:58 crc kubenswrapper[5099]: I0121 18:20:58.708577 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/90ce37a0-d38f-4712-89f0-8572a04c303d-trusted-ca\") pod \"90ce37a0-d38f-4712-89f0-8572a04c303d\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " Jan 21 18:20:58 crc kubenswrapper[5099]: I0121 18:20:58.708615 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/90ce37a0-d38f-4712-89f0-8572a04c303d-ca-trust-extracted\") pod \"90ce37a0-d38f-4712-89f0-8572a04c303d\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " Jan 21 18:20:58 crc kubenswrapper[5099]: I0121 18:20:58.708654 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/90ce37a0-d38f-4712-89f0-8572a04c303d-installation-pull-secrets\") pod \"90ce37a0-d38f-4712-89f0-8572a04c303d\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " Jan 21 18:20:58 crc kubenswrapper[5099]: I0121 18:20:58.708803 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"90ce37a0-d38f-4712-89f0-8572a04c303d\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " Jan 21 18:20:58 crc kubenswrapper[5099]: I0121 18:20:58.708839 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/90ce37a0-d38f-4712-89f0-8572a04c303d-bound-sa-token\") pod 
\"90ce37a0-d38f-4712-89f0-8572a04c303d\" (UID: \"90ce37a0-d38f-4712-89f0-8572a04c303d\") " Jan 21 18:20:58 crc kubenswrapper[5099]: I0121 18:20:58.709898 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90ce37a0-d38f-4712-89f0-8572a04c303d-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "90ce37a0-d38f-4712-89f0-8572a04c303d" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:20:58 crc kubenswrapper[5099]: I0121 18:20:58.710416 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90ce37a0-d38f-4712-89f0-8572a04c303d-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "90ce37a0-d38f-4712-89f0-8572a04c303d" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:20:58 crc kubenswrapper[5099]: I0121 18:20:58.716279 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90ce37a0-d38f-4712-89f0-8572a04c303d-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "90ce37a0-d38f-4712-89f0-8572a04c303d" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:20:58 crc kubenswrapper[5099]: I0121 18:20:58.718840 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90ce37a0-d38f-4712-89f0-8572a04c303d-kube-api-access-n2r7t" (OuterVolumeSpecName: "kube-api-access-n2r7t") pod "90ce37a0-d38f-4712-89f0-8572a04c303d" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d"). InnerVolumeSpecName "kube-api-access-n2r7t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:20:58 crc kubenswrapper[5099]: I0121 18:20:58.718926 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90ce37a0-d38f-4712-89f0-8572a04c303d-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "90ce37a0-d38f-4712-89f0-8572a04c303d" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:20:58 crc kubenswrapper[5099]: I0121 18:20:58.719768 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90ce37a0-d38f-4712-89f0-8572a04c303d-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "90ce37a0-d38f-4712-89f0-8572a04c303d" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:20:58 crc kubenswrapper[5099]: I0121 18:20:58.721670 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "90ce37a0-d38f-4712-89f0-8572a04c303d" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". 
PluginName "kubernetes.io/csi", VolumeGIDValue "" Jan 21 18:20:58 crc kubenswrapper[5099]: I0121 18:20:58.727797 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90ce37a0-d38f-4712-89f0-8572a04c303d-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "90ce37a0-d38f-4712-89f0-8572a04c303d" (UID: "90ce37a0-d38f-4712-89f0-8572a04c303d"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:20:58 crc kubenswrapper[5099]: I0121 18:20:58.810352 5099 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/90ce37a0-d38f-4712-89f0-8572a04c303d-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 21 18:20:58 crc kubenswrapper[5099]: I0121 18:20:58.810431 5099 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/90ce37a0-d38f-4712-89f0-8572a04c303d-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 18:20:58 crc kubenswrapper[5099]: I0121 18:20:58.810459 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n2r7t\" (UniqueName: \"kubernetes.io/projected/90ce37a0-d38f-4712-89f0-8572a04c303d-kube-api-access-n2r7t\") on node \"crc\" DevicePath \"\"" Jan 21 18:20:58 crc kubenswrapper[5099]: I0121 18:20:58.810484 5099 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/90ce37a0-d38f-4712-89f0-8572a04c303d-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 21 18:20:58 crc kubenswrapper[5099]: I0121 18:20:58.810512 5099 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/90ce37a0-d38f-4712-89f0-8572a04c303d-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 21 18:20:58 crc kubenswrapper[5099]: I0121 18:20:58.810533 5099 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/90ce37a0-d38f-4712-89f0-8572a04c303d-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 18:20:58 crc kubenswrapper[5099]: I0121 18:20:58.810549 5099 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/90ce37a0-d38f-4712-89f0-8572a04c303d-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 21 18:20:59 crc kubenswrapper[5099]: I0121 18:20:59.440233 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" event={"ID":"90ce37a0-d38f-4712-89f0-8572a04c303d","Type":"ContainerDied","Data":"20a0a30e793feb11419721e34e5f638fc2dccd9cbfcfe4e7f600de83788284f9"} Jan 21 18:20:59 crc kubenswrapper[5099]: I0121 18:20:59.440301 5099 scope.go:117] "RemoveContainer" containerID="5bd603a21814f9bf4bef85f84ca0bba031d42b79c8a0b15414fea6e193421340" Jan 21 18:20:59 crc kubenswrapper[5099]: I0121 18:20:59.440343 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-tjl2r" Jan 21 18:20:59 crc kubenswrapper[5099]: I0121 18:20:59.487789 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-tjl2r"] Jan 21 18:20:59 crc kubenswrapper[5099]: I0121 18:20:59.490594 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-tjl2r"] Jan 21 18:20:59 crc kubenswrapper[5099]: I0121 18:20:59.925627 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90ce37a0-d38f-4712-89f0-8572a04c303d" path="/var/lib/kubelet/pods/90ce37a0-d38f-4712-89f0-8572a04c303d/volumes" Jan 21 18:21:22 crc kubenswrapper[5099]: I0121 18:21:22.065375 5099 patch_prober.go:28] interesting pod/machine-config-daemon-hsl47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 18:21:22 crc kubenswrapper[5099]: I0121 18:21:22.065933 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 18:21:52 crc kubenswrapper[5099]: I0121 18:21:52.065624 5099 patch_prober.go:28] interesting pod/machine-config-daemon-hsl47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 18:21:52 crc kubenswrapper[5099]: I0121 18:21:52.067543 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 18:21:52 crc kubenswrapper[5099]: I0121 18:21:52.067646 5099 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" Jan 21 18:21:52 crc kubenswrapper[5099]: I0121 18:21:52.068418 5099 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d9f2116d616e1adef348402f9545fe2386c1505cb1d54b97796467b74fd56b6b"} pod="openshift-machine-config-operator/machine-config-daemon-hsl47" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 18:21:52 crc kubenswrapper[5099]: I0121 18:21:52.068491 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" containerID="cri-o://d9f2116d616e1adef348402f9545fe2386c1505cb1d54b97796467b74fd56b6b" gracePeriod=600 Jan 21 18:21:52 crc kubenswrapper[5099]: I0121 18:21:52.826178 5099 generic.go:358] "Generic (PLEG): container finished" podID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerID="d9f2116d616e1adef348402f9545fe2386c1505cb1d54b97796467b74fd56b6b" exitCode=0 Jan 21 18:21:52 crc kubenswrapper[5099]: I0121 18:21:52.826319 5099 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" event={"ID":"b19b831f-eaf0-4c77-859b-84eb9a5f233c","Type":"ContainerDied","Data":"d9f2116d616e1adef348402f9545fe2386c1505cb1d54b97796467b74fd56b6b"} Jan 21 18:21:52 crc kubenswrapper[5099]: I0121 18:21:52.827364 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" event={"ID":"b19b831f-eaf0-4c77-859b-84eb9a5f233c","Type":"ContainerStarted","Data":"73cbfaf70bcdfb205e6384ff89aff3781e54852fa1a2f68835e37c14a636880c"} Jan 21 18:21:52 crc kubenswrapper[5099]: I0121 18:21:52.827451 5099 scope.go:117] "RemoveContainer" containerID="554ace079195fe7f2ecf4de1b40c0c4549e4632325ec6988cab9cec5c62f4f7b" Jan 21 18:22:00 crc kubenswrapper[5099]: I0121 18:22:00.188718 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483662-6z2vh"] Jan 21 18:22:00 crc kubenswrapper[5099]: I0121 18:22:00.193119 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="90ce37a0-d38f-4712-89f0-8572a04c303d" containerName="registry" Jan 21 18:22:00 crc kubenswrapper[5099]: I0121 18:22:00.193168 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="90ce37a0-d38f-4712-89f0-8572a04c303d" containerName="registry" Jan 21 18:22:00 crc kubenswrapper[5099]: I0121 18:22:00.193438 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="90ce37a0-d38f-4712-89f0-8572a04c303d" containerName="registry" Jan 21 18:22:00 crc kubenswrapper[5099]: I0121 18:22:00.198177 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483662-6z2vh" Jan 21 18:22:00 crc kubenswrapper[5099]: I0121 18:22:00.201386 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 18:22:00 crc kubenswrapper[5099]: I0121 18:22:00.201467 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 18:22:00 crc kubenswrapper[5099]: I0121 18:22:00.201826 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-z79sf\"" Jan 21 18:22:00 crc kubenswrapper[5099]: I0121 18:22:00.207705 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483662-6z2vh"] Jan 21 18:22:00 crc kubenswrapper[5099]: I0121 18:22:00.315481 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt2tw\" (UniqueName: \"kubernetes.io/projected/6a4c39e6-db4a-40b4-b7b5-a50799c8ba95-kube-api-access-bt2tw\") pod \"auto-csr-approver-29483662-6z2vh\" (UID: \"6a4c39e6-db4a-40b4-b7b5-a50799c8ba95\") " pod="openshift-infra/auto-csr-approver-29483662-6z2vh" Jan 21 18:22:00 crc kubenswrapper[5099]: I0121 18:22:00.417055 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bt2tw\" (UniqueName: \"kubernetes.io/projected/6a4c39e6-db4a-40b4-b7b5-a50799c8ba95-kube-api-access-bt2tw\") pod \"auto-csr-approver-29483662-6z2vh\" (UID: \"6a4c39e6-db4a-40b4-b7b5-a50799c8ba95\") " pod="openshift-infra/auto-csr-approver-29483662-6z2vh" Jan 21 18:22:00 crc kubenswrapper[5099]: I0121 18:22:00.444959 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bt2tw\" (UniqueName: 
\"kubernetes.io/projected/6a4c39e6-db4a-40b4-b7b5-a50799c8ba95-kube-api-access-bt2tw\") pod \"auto-csr-approver-29483662-6z2vh\" (UID: \"6a4c39e6-db4a-40b4-b7b5-a50799c8ba95\") " pod="openshift-infra/auto-csr-approver-29483662-6z2vh" Jan 21 18:22:00 crc kubenswrapper[5099]: I0121 18:22:00.526928 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483662-6z2vh" Jan 21 18:22:00 crc kubenswrapper[5099]: I0121 18:22:00.977704 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483662-6z2vh"] Jan 21 18:22:01 crc kubenswrapper[5099]: I0121 18:22:01.893161 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483662-6z2vh" event={"ID":"6a4c39e6-db4a-40b4-b7b5-a50799c8ba95","Type":"ContainerStarted","Data":"011cac2b1708ed73202ae39f540c4926e19f8d4bd5a85403b9870e5fdc42a19e"} Jan 21 18:22:04 crc kubenswrapper[5099]: I0121 18:22:04.851627 5099 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-z2766" Jan 21 18:22:04 crc kubenswrapper[5099]: I0121 18:22:04.874336 5099 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-z2766" Jan 21 18:22:04 crc kubenswrapper[5099]: I0121 18:22:04.918844 5099 generic.go:358] "Generic (PLEG): container finished" podID="6a4c39e6-db4a-40b4-b7b5-a50799c8ba95" containerID="a188f09831633ff3332f76f220f223accd559d5d7c87ade9a5f39b641e4d24ac" exitCode=0 Jan 21 18:22:04 crc kubenswrapper[5099]: I0121 18:22:04.919147 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483662-6z2vh" event={"ID":"6a4c39e6-db4a-40b4-b7b5-a50799c8ba95","Type":"ContainerDied","Data":"a188f09831633ff3332f76f220f223accd559d5d7c87ade9a5f39b641e4d24ac"} Jan 21 18:22:05 crc kubenswrapper[5099]: I0121 18:22:05.876705 5099 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-02-20 18:17:04 +0000 UTC" deadline="2026-02-17 11:25:49.581136735 +0000 UTC" Jan 21 18:22:05 crc kubenswrapper[5099]: I0121 18:22:05.876850 5099 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="641h3m43.704293192s" Jan 21 18:22:06 crc kubenswrapper[5099]: I0121 18:22:06.151515 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483662-6z2vh" Jan 21 18:22:06 crc kubenswrapper[5099]: I0121 18:22:06.304947 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bt2tw\" (UniqueName: \"kubernetes.io/projected/6a4c39e6-db4a-40b4-b7b5-a50799c8ba95-kube-api-access-bt2tw\") pod \"6a4c39e6-db4a-40b4-b7b5-a50799c8ba95\" (UID: \"6a4c39e6-db4a-40b4-b7b5-a50799c8ba95\") " Jan 21 18:22:06 crc kubenswrapper[5099]: I0121 18:22:06.315621 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a4c39e6-db4a-40b4-b7b5-a50799c8ba95-kube-api-access-bt2tw" (OuterVolumeSpecName: "kube-api-access-bt2tw") pod "6a4c39e6-db4a-40b4-b7b5-a50799c8ba95" (UID: "6a4c39e6-db4a-40b4-b7b5-a50799c8ba95"). InnerVolumeSpecName "kube-api-access-bt2tw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:22:06 crc kubenswrapper[5099]: I0121 18:22:06.407281 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bt2tw\" (UniqueName: \"kubernetes.io/projected/6a4c39e6-db4a-40b4-b7b5-a50799c8ba95-kube-api-access-bt2tw\") on node \"crc\" DevicePath \"\"" Jan 21 18:22:06 crc kubenswrapper[5099]: I0121 18:22:06.877176 5099 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-02-20 18:17:04 +0000 UTC" deadline="2026-02-14 13:07:49.3500981 +0000 UTC" Jan 21 18:22:06 crc kubenswrapper[5099]: I0121 18:22:06.877243 5099 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="570h45m42.472859814s" Jan 21 18:22:06 crc kubenswrapper[5099]: I0121 18:22:06.933067 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483662-6z2vh" Jan 21 18:22:06 crc kubenswrapper[5099]: I0121 18:22:06.933104 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483662-6z2vh" event={"ID":"6a4c39e6-db4a-40b4-b7b5-a50799c8ba95","Type":"ContainerDied","Data":"011cac2b1708ed73202ae39f540c4926e19f8d4bd5a85403b9870e5fdc42a19e"} Jan 21 18:22:06 crc kubenswrapper[5099]: I0121 18:22:06.933150 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="011cac2b1708ed73202ae39f540c4926e19f8d4bd5a85403b9870e5fdc42a19e" Jan 21 18:23:03 crc kubenswrapper[5099]: I0121 18:23:03.977621 5099 scope.go:117] "RemoveContainer" containerID="7be0581f185328c6af4421036de97f76f662ec44ee2d376efb2cd0225cd73475" Jan 21 18:23:04 crc kubenswrapper[5099]: I0121 18:23:04.006653 5099 scope.go:117] "RemoveContainer" containerID="329f0d0d946768f6cdc3add419d1dc54ab9bb87976d3a87f6ba162d6d483a8b1" Jan 21 18:23:04 crc kubenswrapper[5099]: I0121 18:23:04.033991 5099 scope.go:117] "RemoveContainer" containerID="d10e7b5ede91b4ac4524ed46b4972484ecd78e91e91cc21c2bfe49085d73cb41" Jan 21 18:23:52 crc kubenswrapper[5099]: I0121 18:23:52.065517 5099 patch_prober.go:28] interesting pod/machine-config-daemon-hsl47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 18:23:52 crc kubenswrapper[5099]: I0121 18:23:52.066411 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 18:24:00 crc kubenswrapper[5099]: I0121 18:24:00.140394 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483664-74kt9"] Jan 21 18:24:00 crc kubenswrapper[5099]: I0121 18:24:00.142440 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6a4c39e6-db4a-40b4-b7b5-a50799c8ba95" containerName="oc" Jan 21 18:24:00 crc kubenswrapper[5099]: I0121 18:24:00.142465 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a4c39e6-db4a-40b4-b7b5-a50799c8ba95" containerName="oc" Jan 21 18:24:00 crc kubenswrapper[5099]: I0121 18:24:00.142637 5099 memory_manager.go:356] "RemoveStaleState removing state" 
podUID="6a4c39e6-db4a-40b4-b7b5-a50799c8ba95" containerName="oc" Jan 21 18:24:00 crc kubenswrapper[5099]: I0121 18:24:00.153283 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483664-74kt9" Jan 21 18:24:00 crc kubenswrapper[5099]: I0121 18:24:00.154721 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483664-74kt9"] Jan 21 18:24:00 crc kubenswrapper[5099]: I0121 18:24:00.157247 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 18:24:00 crc kubenswrapper[5099]: I0121 18:24:00.157625 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-z79sf\"" Jan 21 18:24:00 crc kubenswrapper[5099]: I0121 18:24:00.159619 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 18:24:00 crc kubenswrapper[5099]: I0121 18:24:00.215529 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm7kq\" (UniqueName: \"kubernetes.io/projected/2142c023-8835-4160-a6f6-fccfb6a68ba7-kube-api-access-bm7kq\") pod \"auto-csr-approver-29483664-74kt9\" (UID: \"2142c023-8835-4160-a6f6-fccfb6a68ba7\") " pod="openshift-infra/auto-csr-approver-29483664-74kt9" Jan 21 18:24:00 crc kubenswrapper[5099]: I0121 18:24:00.317123 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bm7kq\" (UniqueName: \"kubernetes.io/projected/2142c023-8835-4160-a6f6-fccfb6a68ba7-kube-api-access-bm7kq\") pod \"auto-csr-approver-29483664-74kt9\" (UID: \"2142c023-8835-4160-a6f6-fccfb6a68ba7\") " pod="openshift-infra/auto-csr-approver-29483664-74kt9" Jan 21 18:24:00 crc kubenswrapper[5099]: I0121 18:24:00.345311 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm7kq\" (UniqueName: \"kubernetes.io/projected/2142c023-8835-4160-a6f6-fccfb6a68ba7-kube-api-access-bm7kq\") pod \"auto-csr-approver-29483664-74kt9\" (UID: \"2142c023-8835-4160-a6f6-fccfb6a68ba7\") " pod="openshift-infra/auto-csr-approver-29483664-74kt9" Jan 21 18:24:00 crc kubenswrapper[5099]: I0121 18:24:00.486815 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483664-74kt9" Jan 21 18:24:00 crc kubenswrapper[5099]: I0121 18:24:00.695742 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483664-74kt9"] Jan 21 18:24:00 crc kubenswrapper[5099]: I0121 18:24:00.715299 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483664-74kt9" event={"ID":"2142c023-8835-4160-a6f6-fccfb6a68ba7","Type":"ContainerStarted","Data":"fa8d44d160659166e7296c1a290b666a2cc7620fae7c600c2ee1769a3d5d8831"} Jan 21 18:24:02 crc kubenswrapper[5099]: I0121 18:24:02.729261 5099 generic.go:358] "Generic (PLEG): container finished" podID="2142c023-8835-4160-a6f6-fccfb6a68ba7" containerID="bb021283917d73fe471a685d1f7d607443d06de7b8893b1a750aba5095ac3555" exitCode=0 Jan 21 18:24:02 crc kubenswrapper[5099]: I0121 18:24:02.729390 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483664-74kt9" event={"ID":"2142c023-8835-4160-a6f6-fccfb6a68ba7","Type":"ContainerDied","Data":"bb021283917d73fe471a685d1f7d607443d06de7b8893b1a750aba5095ac3555"} Jan 21 18:24:03 crc kubenswrapper[5099]: I0121 18:24:03.975305 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483664-74kt9" Jan 21 18:24:04 crc kubenswrapper[5099]: I0121 18:24:04.071616 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bm7kq\" (UniqueName: \"kubernetes.io/projected/2142c023-8835-4160-a6f6-fccfb6a68ba7-kube-api-access-bm7kq\") pod \"2142c023-8835-4160-a6f6-fccfb6a68ba7\" (UID: \"2142c023-8835-4160-a6f6-fccfb6a68ba7\") " Jan 21 18:24:04 crc kubenswrapper[5099]: I0121 18:24:04.081981 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2142c023-8835-4160-a6f6-fccfb6a68ba7-kube-api-access-bm7kq" (OuterVolumeSpecName: "kube-api-access-bm7kq") pod "2142c023-8835-4160-a6f6-fccfb6a68ba7" (UID: "2142c023-8835-4160-a6f6-fccfb6a68ba7"). InnerVolumeSpecName "kube-api-access-bm7kq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:24:04 crc kubenswrapper[5099]: I0121 18:24:04.173790 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bm7kq\" (UniqueName: \"kubernetes.io/projected/2142c023-8835-4160-a6f6-fccfb6a68ba7-kube-api-access-bm7kq\") on node \"crc\" DevicePath \"\"" Jan 21 18:24:04 crc kubenswrapper[5099]: I0121 18:24:04.240554 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 18:24:04 crc kubenswrapper[5099]: I0121 18:24:04.242082 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 18:24:04 crc kubenswrapper[5099]: I0121 18:24:04.747431 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483664-74kt9" Jan 21 18:24:04 crc kubenswrapper[5099]: I0121 18:24:04.747409 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483664-74kt9" event={"ID":"2142c023-8835-4160-a6f6-fccfb6a68ba7","Type":"ContainerDied","Data":"fa8d44d160659166e7296c1a290b666a2cc7620fae7c600c2ee1769a3d5d8831"} Jan 21 18:24:04 crc kubenswrapper[5099]: I0121 18:24:04.747572 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa8d44d160659166e7296c1a290b666a2cc7620fae7c600c2ee1769a3d5d8831" Jan 21 18:24:22 crc kubenswrapper[5099]: I0121 18:24:22.064544 5099 patch_prober.go:28] interesting pod/machine-config-daemon-hsl47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 18:24:22 crc kubenswrapper[5099]: I0121 18:24:22.065634 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 18:24:52 crc kubenswrapper[5099]: I0121 18:24:52.065250 5099 patch_prober.go:28] interesting pod/machine-config-daemon-hsl47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 18:24:52 crc kubenswrapper[5099]: I0121 18:24:52.066039 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 18:24:52 crc kubenswrapper[5099]: I0121 18:24:52.066117 5099 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" Jan 21 18:24:52 crc kubenswrapper[5099]: I0121 18:24:52.066922 5099 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"73cbfaf70bcdfb205e6384ff89aff3781e54852fa1a2f68835e37c14a636880c"} pod="openshift-machine-config-operator/machine-config-daemon-hsl47" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 18:24:52 crc kubenswrapper[5099]: I0121 18:24:52.067005 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" containerID="cri-o://73cbfaf70bcdfb205e6384ff89aff3781e54852fa1a2f68835e37c14a636880c" gracePeriod=600 Jan 21 18:24:53 crc kubenswrapper[5099]: I0121 18:24:53.098336 5099 generic.go:358] "Generic (PLEG): container finished" podID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerID="73cbfaf70bcdfb205e6384ff89aff3781e54852fa1a2f68835e37c14a636880c" exitCode=0 Jan 21 18:24:53 crc kubenswrapper[5099]: I0121 18:24:53.098432 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-hsl47" event={"ID":"b19b831f-eaf0-4c77-859b-84eb9a5f233c","Type":"ContainerDied","Data":"73cbfaf70bcdfb205e6384ff89aff3781e54852fa1a2f68835e37c14a636880c"} Jan 21 18:24:53 crc kubenswrapper[5099]: I0121 18:24:53.099184 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" event={"ID":"b19b831f-eaf0-4c77-859b-84eb9a5f233c","Type":"ContainerStarted","Data":"51ffcc3cf1aa6ab3bfdb8cd2b8bb98ce9b9992d447364b1a4c0eb51c24a6f574"} Jan 21 18:24:53 crc kubenswrapper[5099]: I0121 18:24:53.099222 5099 scope.go:117] "RemoveContainer" containerID="d9f2116d616e1adef348402f9545fe2386c1505cb1d54b97796467b74fd56b6b" Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.129900 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-nxrc9"] Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.131293 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-nxrc9" podUID="dd3b8a6d-69a8-4079-a747-f379b71bcafe" containerName="kube-rbac-proxy" containerID="cri-o://e48ba06eed8a60bdf2d7557c294e37262e8691610d885782f8df2db2aa267c1c" gracePeriod=30 Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.131505 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-nxrc9" podUID="dd3b8a6d-69a8-4079-a747-f379b71bcafe" containerName="ovnkube-cluster-manager" containerID="cri-o://de7e076cb058e010d8b522ea46baabd209c7c88dc792cced5d4ed291838ddd71" gracePeriod=30 Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.328122 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-nxrc9" Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.351244 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-svjkb"] Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.352008 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" podUID="d7521550-bc40-43eb-bcb0-f563416d810b" containerName="ovn-controller" containerID="cri-o://3f3a6f8ec66e21454a7efd27b778af42d65195909a54ee49adcee8f6711d1059" gracePeriod=30 Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.352099 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" podUID="d7521550-bc40-43eb-bcb0-f563416d810b" containerName="northd" containerID="cri-o://5eac65293831383e6044c55378d17fa35899f3a75c86c3ef4ead445b6e94bbf3" gracePeriod=30 Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.352191 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" podUID="d7521550-bc40-43eb-bcb0-f563416d810b" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://15e8f6887fa5951d2123b3e4d28614bc9f8a4dd5867c331733faa3802c626ab0" gracePeriod=30 Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.352259 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" podUID="d7521550-bc40-43eb-bcb0-f563416d810b" containerName="kube-rbac-proxy-node" containerID="cri-o://078229736ed14d660e3b4e681785719d4a6d65296cfeb9fb3bb80a4e3c6d88c1" gracePeriod=30 Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.352305 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" podUID="d7521550-bc40-43eb-bcb0-f563416d810b" containerName="ovn-acl-logging" containerID="cri-o://be8ed0e4f720658f71be2dd52370e9e4b72182cfaabe7e44bc52ace65660ab34" gracePeriod=30 Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.352361 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" podUID="d7521550-bc40-43eb-bcb0-f563416d810b" containerName="sbdb" containerID="cri-o://715177bbd2f8050a52d5a8fac8a582eb809c747ef416e4a4da743949f600357b" gracePeriod=30 Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.352370 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" podUID="d7521550-bc40-43eb-bcb0-f563416d810b" containerName="nbdb" containerID="cri-o://ec56a2db33e5899e939975d066614498ddc50ad9fd51d101c00aeb90cb8a4454" gracePeriod=30 Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.368145 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sfhv4"] Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.371156 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dd3b8a6d-69a8-4079-a747-f379b71bcafe" containerName="ovnkube-cluster-manager" Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.371193 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd3b8a6d-69a8-4079-a747-f379b71bcafe" containerName="ovnkube-cluster-manager" Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.371213 5099 cpu_manager.go:401] "RemoveStaleState: 
containerMap: removing container" podUID="2142c023-8835-4160-a6f6-fccfb6a68ba7" containerName="oc"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.371222 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="2142c023-8835-4160-a6f6-fccfb6a68ba7" containerName="oc"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.371255 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dd3b8a6d-69a8-4079-a747-f379b71bcafe" containerName="kube-rbac-proxy"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.371263 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd3b8a6d-69a8-4079-a747-f379b71bcafe" containerName="kube-rbac-proxy"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.371445 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="dd3b8a6d-69a8-4079-a747-f379b71bcafe" containerName="ovnkube-cluster-manager"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.371461 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="2142c023-8835-4160-a6f6-fccfb6a68ba7" containerName="oc"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.371476 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="dd3b8a6d-69a8-4079-a747-f379b71bcafe" containerName="kube-rbac-proxy"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.373639 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dd3b8a6d-69a8-4079-a747-f379b71bcafe-env-overrides\") pod \"dd3b8a6d-69a8-4079-a747-f379b71bcafe\" (UID: \"dd3b8a6d-69a8-4079-a747-f379b71bcafe\") "
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.373851 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dd3b8a6d-69a8-4079-a747-f379b71bcafe-ovn-control-plane-metrics-cert\") pod \"dd3b8a6d-69a8-4079-a747-f379b71bcafe\" (UID: \"dd3b8a6d-69a8-4079-a747-f379b71bcafe\") "
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.373931 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dd3b8a6d-69a8-4079-a747-f379b71bcafe-ovnkube-config\") pod \"dd3b8a6d-69a8-4079-a747-f379b71bcafe\" (UID: \"dd3b8a6d-69a8-4079-a747-f379b71bcafe\") "
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.373982 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7xvb\" (UniqueName: \"kubernetes.io/projected/dd3b8a6d-69a8-4079-a747-f379b71bcafe-kube-api-access-s7xvb\") pod \"dd3b8a6d-69a8-4079-a747-f379b71bcafe\" (UID: \"dd3b8a6d-69a8-4079-a747-f379b71bcafe\") "
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.376179 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sfhv4"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.378048 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd3b8a6d-69a8-4079-a747-f379b71bcafe-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "dd3b8a6d-69a8-4079-a747-f379b71bcafe" (UID: "dd3b8a6d-69a8-4079-a747-f379b71bcafe"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.378448 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd3b8a6d-69a8-4079-a747-f379b71bcafe-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "dd3b8a6d-69a8-4079-a747-f379b71bcafe" (UID: "dd3b8a6d-69a8-4079-a747-f379b71bcafe"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.394512 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd3b8a6d-69a8-4079-a747-f379b71bcafe-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "dd3b8a6d-69a8-4079-a747-f379b71bcafe" (UID: "dd3b8a6d-69a8-4079-a747-f379b71bcafe"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.396168 5099 generic.go:358] "Generic (PLEG): container finished" podID="dd3b8a6d-69a8-4079-a747-f379b71bcafe" containerID="de7e076cb058e010d8b522ea46baabd209c7c88dc792cced5d4ed291838ddd71" exitCode=0
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.396263 5099 generic.go:358] "Generic (PLEG): container finished" podID="dd3b8a6d-69a8-4079-a747-f379b71bcafe" containerID="e48ba06eed8a60bdf2d7557c294e37262e8691610d885782f8df2db2aa267c1c" exitCode=0
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.396288 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-nxrc9"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.396219 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-nxrc9" event={"ID":"dd3b8a6d-69a8-4079-a747-f379b71bcafe","Type":"ContainerDied","Data":"de7e076cb058e010d8b522ea46baabd209c7c88dc792cced5d4ed291838ddd71"}
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.396542 5099 scope.go:117] "RemoveContainer" containerID="de7e076cb058e010d8b522ea46baabd209c7c88dc792cced5d4ed291838ddd71"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.396613 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-nxrc9" event={"ID":"dd3b8a6d-69a8-4079-a747-f379b71bcafe","Type":"ContainerDied","Data":"e48ba06eed8a60bdf2d7557c294e37262e8691610d885782f8df2db2aa267c1c"}
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.396637 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-nxrc9" event={"ID":"dd3b8a6d-69a8-4079-a747-f379b71bcafe","Type":"ContainerDied","Data":"c8d1a7ce264822b7a9ad6dbef2f4955a6a24275a032d785aa0c77b41a055c3b9"}
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.397344 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd3b8a6d-69a8-4079-a747-f379b71bcafe-kube-api-access-s7xvb" (OuterVolumeSpecName: "kube-api-access-s7xvb") pod "dd3b8a6d-69a8-4079-a747-f379b71bcafe" (UID: "dd3b8a6d-69a8-4079-a747-f379b71bcafe"). InnerVolumeSpecName "kube-api-access-s7xvb". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.419581 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" podUID="d7521550-bc40-43eb-bcb0-f563416d810b" containerName="ovnkube-controller" containerID="cri-o://b9195a13c90ca87a12f76ad8f66c088f4af22fe03f2960a4bab916c4d5d5b2d7" gracePeriod=30
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.455374 5099 scope.go:117] "RemoveContainer" containerID="e48ba06eed8a60bdf2d7557c294e37262e8691610d885782f8df2db2aa267c1c"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.477564 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfbvt\" (UniqueName: \"kubernetes.io/projected/c532b732-a208-44d7-803c-103787a3b893-kube-api-access-cfbvt\") pod \"ovnkube-control-plane-97c9b6c48-sfhv4\" (UID: \"c532b732-a208-44d7-803c-103787a3b893\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sfhv4"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.477634 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c532b732-a208-44d7-803c-103787a3b893-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-sfhv4\" (UID: \"c532b732-a208-44d7-803c-103787a3b893\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sfhv4"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.477669 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c532b732-a208-44d7-803c-103787a3b893-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-sfhv4\" (UID: \"c532b732-a208-44d7-803c-103787a3b893\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sfhv4"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.477713 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c532b732-a208-44d7-803c-103787a3b893-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-sfhv4\" (UID: \"c532b732-a208-44d7-803c-103787a3b893\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sfhv4"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.477775 5099 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dd3b8a6d-69a8-4079-a747-f379b71bcafe-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.477788 5099 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dd3b8a6d-69a8-4079-a747-f379b71bcafe-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.477797 5099 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dd3b8a6d-69a8-4079-a747-f379b71bcafe-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.477806 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-s7xvb\" (UniqueName: \"kubernetes.io/projected/dd3b8a6d-69a8-4079-a747-f379b71bcafe-kube-api-access-s7xvb\") on node \"crc\" DevicePath \"\""
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.509078 5099 scope.go:117] "RemoveContainer" containerID="de7e076cb058e010d8b522ea46baabd209c7c88dc792cced5d4ed291838ddd71"
Jan 21 18:25:31 crc kubenswrapper[5099]: E0121 18:25:31.509717 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de7e076cb058e010d8b522ea46baabd209c7c88dc792cced5d4ed291838ddd71\": container with ID starting with de7e076cb058e010d8b522ea46baabd209c7c88dc792cced5d4ed291838ddd71 not found: ID does not exist" containerID="de7e076cb058e010d8b522ea46baabd209c7c88dc792cced5d4ed291838ddd71"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.509787 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de7e076cb058e010d8b522ea46baabd209c7c88dc792cced5d4ed291838ddd71"} err="failed to get container status \"de7e076cb058e010d8b522ea46baabd209c7c88dc792cced5d4ed291838ddd71\": rpc error: code = NotFound desc = could not find container \"de7e076cb058e010d8b522ea46baabd209c7c88dc792cced5d4ed291838ddd71\": container with ID starting with de7e076cb058e010d8b522ea46baabd209c7c88dc792cced5d4ed291838ddd71 not found: ID does not exist"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.509817 5099 scope.go:117] "RemoveContainer" containerID="e48ba06eed8a60bdf2d7557c294e37262e8691610d885782f8df2db2aa267c1c"
Jan 21 18:25:31 crc kubenswrapper[5099]: E0121 18:25:31.510342 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e48ba06eed8a60bdf2d7557c294e37262e8691610d885782f8df2db2aa267c1c\": container with ID starting with e48ba06eed8a60bdf2d7557c294e37262e8691610d885782f8df2db2aa267c1c not found: ID does not exist" containerID="e48ba06eed8a60bdf2d7557c294e37262e8691610d885782f8df2db2aa267c1c"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.510388 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e48ba06eed8a60bdf2d7557c294e37262e8691610d885782f8df2db2aa267c1c"} err="failed to get container status \"e48ba06eed8a60bdf2d7557c294e37262e8691610d885782f8df2db2aa267c1c\": rpc error: code = NotFound desc = could not find container \"e48ba06eed8a60bdf2d7557c294e37262e8691610d885782f8df2db2aa267c1c\": container with ID starting with e48ba06eed8a60bdf2d7557c294e37262e8691610d885782f8df2db2aa267c1c not found: ID does not exist"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.510417 5099 scope.go:117] "RemoveContainer" containerID="de7e076cb058e010d8b522ea46baabd209c7c88dc792cced5d4ed291838ddd71"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.510923 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de7e076cb058e010d8b522ea46baabd209c7c88dc792cced5d4ed291838ddd71"} err="failed to get container status \"de7e076cb058e010d8b522ea46baabd209c7c88dc792cced5d4ed291838ddd71\": rpc error: code = NotFound desc = could not find container \"de7e076cb058e010d8b522ea46baabd209c7c88dc792cced5d4ed291838ddd71\": container with ID starting with de7e076cb058e010d8b522ea46baabd209c7c88dc792cced5d4ed291838ddd71 not found: ID does not exist"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.510955 5099 scope.go:117] "RemoveContainer" containerID="e48ba06eed8a60bdf2d7557c294e37262e8691610d885782f8df2db2aa267c1c"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.511476 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e48ba06eed8a60bdf2d7557c294e37262e8691610d885782f8df2db2aa267c1c"} err="failed to get container status \"e48ba06eed8a60bdf2d7557c294e37262e8691610d885782f8df2db2aa267c1c\": rpc error: code = NotFound desc = could not find container \"e48ba06eed8a60bdf2d7557c294e37262e8691610d885782f8df2db2aa267c1c\": container with ID starting with e48ba06eed8a60bdf2d7557c294e37262e8691610d885782f8df2db2aa267c1c not found: ID does not exist"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.579448 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c532b732-a208-44d7-803c-103787a3b893-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-sfhv4\" (UID: \"c532b732-a208-44d7-803c-103787a3b893\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sfhv4"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.579548 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c532b732-a208-44d7-803c-103787a3b893-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-sfhv4\" (UID: \"c532b732-a208-44d7-803c-103787a3b893\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sfhv4"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.579631 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c532b732-a208-44d7-803c-103787a3b893-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-sfhv4\" (UID: \"c532b732-a208-44d7-803c-103787a3b893\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sfhv4"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.579681 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cfbvt\" (UniqueName: \"kubernetes.io/projected/c532b732-a208-44d7-803c-103787a3b893-kube-api-access-cfbvt\") pod \"ovnkube-control-plane-97c9b6c48-sfhv4\" (UID: \"c532b732-a208-44d7-803c-103787a3b893\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sfhv4"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.580252 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c532b732-a208-44d7-803c-103787a3b893-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-sfhv4\" (UID: \"c532b732-a208-44d7-803c-103787a3b893\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sfhv4"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.580482 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c532b732-a208-44d7-803c-103787a3b893-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-sfhv4\" (UID: \"c532b732-a208-44d7-803c-103787a3b893\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sfhv4"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.587687 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c532b732-a208-44d7-803c-103787a3b893-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-sfhv4\" (UID: \"c532b732-a208-44d7-803c-103787a3b893\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sfhv4"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.598997 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfbvt\" (UniqueName: \"kubernetes.io/projected/c532b732-a208-44d7-803c-103787a3b893-kube-api-access-cfbvt\") pod \"ovnkube-control-plane-97c9b6c48-sfhv4\" (UID: \"c532b732-a208-44d7-803c-103787a3b893\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sfhv4"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.665468 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-svjkb_d7521550-bc40-43eb-bcb0-f563416d810b/ovn-acl-logging/0.log"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.666552 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-svjkb_d7521550-bc40-43eb-bcb0-f563416d810b/ovn-controller/0.log"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.667130 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-svjkb"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.725525 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-tz5cz"]
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.726539 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d7521550-bc40-43eb-bcb0-f563416d810b" containerName="ovn-controller"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.726568 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7521550-bc40-43eb-bcb0-f563416d810b" containerName="ovn-controller"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.726584 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d7521550-bc40-43eb-bcb0-f563416d810b" containerName="kube-rbac-proxy-node"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.726591 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7521550-bc40-43eb-bcb0-f563416d810b" containerName="kube-rbac-proxy-node"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.726601 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d7521550-bc40-43eb-bcb0-f563416d810b" containerName="northd"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.726608 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7521550-bc40-43eb-bcb0-f563416d810b" containerName="northd"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.726616 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d7521550-bc40-43eb-bcb0-f563416d810b" containerName="kubecfg-setup"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.726624 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7521550-bc40-43eb-bcb0-f563416d810b" containerName="kubecfg-setup"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.726630 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d7521550-bc40-43eb-bcb0-f563416d810b" containerName="nbdb"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.726635 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7521550-bc40-43eb-bcb0-f563416d810b" containerName="nbdb"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.726644 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d7521550-bc40-43eb-bcb0-f563416d810b" containerName="ovnkube-controller"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.726650 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7521550-bc40-43eb-bcb0-f563416d810b" containerName="ovnkube-controller"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.726668 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d7521550-bc40-43eb-bcb0-f563416d810b" containerName="kube-rbac-proxy-ovn-metrics"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.726674 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7521550-bc40-43eb-bcb0-f563416d810b" containerName="kube-rbac-proxy-ovn-metrics"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.726683 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d7521550-bc40-43eb-bcb0-f563416d810b" containerName="ovn-acl-logging"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.726688 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7521550-bc40-43eb-bcb0-f563416d810b" containerName="ovn-acl-logging"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.726711 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d7521550-bc40-43eb-bcb0-f563416d810b" containerName="sbdb"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.726717 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7521550-bc40-43eb-bcb0-f563416d810b" containerName="sbdb"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.726818 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="d7521550-bc40-43eb-bcb0-f563416d810b" containerName="ovn-acl-logging"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.726833 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="d7521550-bc40-43eb-bcb0-f563416d810b" containerName="ovn-controller"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.726840 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="d7521550-bc40-43eb-bcb0-f563416d810b" containerName="kube-rbac-proxy-ovn-metrics"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.726848 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="d7521550-bc40-43eb-bcb0-f563416d810b" containerName="northd"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.726855 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="d7521550-bc40-43eb-bcb0-f563416d810b" containerName="ovnkube-controller"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.726863 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="d7521550-bc40-43eb-bcb0-f563416d810b" containerName="nbdb"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.726871 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="d7521550-bc40-43eb-bcb0-f563416d810b" containerName="kube-rbac-proxy-node"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.726877 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="d7521550-bc40-43eb-bcb0-f563416d810b" containerName="sbdb"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.733231 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.736217 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-nxrc9"]
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.741697 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sfhv4"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.755865 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-nxrc9"]
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.782307 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-etc-openvswitch\") pod \"d7521550-bc40-43eb-bcb0-f563416d810b\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") "
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.782639 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-host-cni-netd\") pod \"d7521550-bc40-43eb-bcb0-f563416d810b\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") "
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.782811 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-host-kubelet\") pod \"d7521550-bc40-43eb-bcb0-f563416d810b\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") "
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.782905 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"d7521550-bc40-43eb-bcb0-f563416d810b\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") "
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.783008 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-log-socket\") pod \"d7521550-bc40-43eb-bcb0-f563416d810b\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") "
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.783102 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-var-lib-openvswitch\") pod \"d7521550-bc40-43eb-bcb0-f563416d810b\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") "
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.783162 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-systemd-units\") pod \"d7521550-bc40-43eb-bcb0-f563416d810b\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") "
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.783252 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d7521550-bc40-43eb-bcb0-f563416d810b-ovnkube-script-lib\") pod \"d7521550-bc40-43eb-bcb0-f563416d810b\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") "
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.783339 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-host-run-ovn-kubernetes\") pod \"d7521550-bc40-43eb-bcb0-f563416d810b\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") "
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.783410 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ss74r\" (UniqueName: \"kubernetes.io/projected/d7521550-bc40-43eb-bcb0-f563416d810b-kube-api-access-ss74r\") pod \"d7521550-bc40-43eb-bcb0-f563416d810b\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") "
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.783476 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-host-slash\") pod \"d7521550-bc40-43eb-bcb0-f563416d810b\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") "
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.783558 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d7521550-bc40-43eb-bcb0-f563416d810b-env-overrides\") pod \"d7521550-bc40-43eb-bcb0-f563416d810b\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") "
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.783640 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-run-systemd\") pod \"d7521550-bc40-43eb-bcb0-f563416d810b\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") "
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.783751 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-run-openvswitch\") pod \"d7521550-bc40-43eb-bcb0-f563416d810b\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") "
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.783828 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-host-run-netns\") pod \"d7521550-bc40-43eb-bcb0-f563416d810b\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") "
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.783893 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-run-ovn\") pod \"d7521550-bc40-43eb-bcb0-f563416d810b\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") "
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.784944 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-node-log\") pod \"d7521550-bc40-43eb-bcb0-f563416d810b\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") "
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.785047 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-host-cni-bin\") pod \"d7521550-bc40-43eb-bcb0-f563416d810b\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") "
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.785130 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d7521550-bc40-43eb-bcb0-f563416d810b-ovnkube-config\") pod \"d7521550-bc40-43eb-bcb0-f563416d810b\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") "
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.785249 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d7521550-bc40-43eb-bcb0-f563416d810b-ovn-node-metrics-cert\") pod \"d7521550-bc40-43eb-bcb0-f563416d810b\" (UID: \"d7521550-bc40-43eb-bcb0-f563416d810b\") "
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.785467 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-host-slash\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.785543 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gldst\" (UniqueName: \"kubernetes.io/projected/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-kube-api-access-gldst\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.785972 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-host-cni-bin\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.786123 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.782847 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "d7521550-bc40-43eb-bcb0-f563416d810b" (UID: "d7521550-bc40-43eb-bcb0-f563416d810b"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.782883 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "d7521550-bc40-43eb-bcb0-f563416d810b" (UID: "d7521550-bc40-43eb-bcb0-f563416d810b"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.782912 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "d7521550-bc40-43eb-bcb0-f563416d810b" (UID: "d7521550-bc40-43eb-bcb0-f563416d810b"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.782945 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "d7521550-bc40-43eb-bcb0-f563416d810b" (UID: "d7521550-bc40-43eb-bcb0-f563416d810b"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.783989 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "d7521550-bc40-43eb-bcb0-f563416d810b" (UID: "d7521550-bc40-43eb-bcb0-f563416d810b"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.784010 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-log-socket" (OuterVolumeSpecName: "log-socket") pod "d7521550-bc40-43eb-bcb0-f563416d810b" (UID: "d7521550-bc40-43eb-bcb0-f563416d810b"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.784027 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "d7521550-bc40-43eb-bcb0-f563416d810b" (UID: "d7521550-bc40-43eb-bcb0-f563416d810b"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.784043 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "d7521550-bc40-43eb-bcb0-f563416d810b" (UID: "d7521550-bc40-43eb-bcb0-f563416d810b"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.784607 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7521550-bc40-43eb-bcb0-f563416d810b-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "d7521550-bc40-43eb-bcb0-f563416d810b" (UID: "d7521550-bc40-43eb-bcb0-f563416d810b"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.784865 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-host-slash" (OuterVolumeSpecName: "host-slash") pod "d7521550-bc40-43eb-bcb0-f563416d810b" (UID: "d7521550-bc40-43eb-bcb0-f563416d810b"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.784905 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "d7521550-bc40-43eb-bcb0-f563416d810b" (UID: "d7521550-bc40-43eb-bcb0-f563416d810b"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.785791 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-node-log" (OuterVolumeSpecName: "node-log") pod "d7521550-bc40-43eb-bcb0-f563416d810b" (UID: "d7521550-bc40-43eb-bcb0-f563416d810b"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.785813 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "d7521550-bc40-43eb-bcb0-f563416d810b" (UID: "d7521550-bc40-43eb-bcb0-f563416d810b"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.785830 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "d7521550-bc40-43eb-bcb0-f563416d810b" (UID: "d7521550-bc40-43eb-bcb0-f563416d810b"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.786196 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7521550-bc40-43eb-bcb0-f563416d810b-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "d7521550-bc40-43eb-bcb0-f563416d810b" (UID: "d7521550-bc40-43eb-bcb0-f563416d810b"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.786220 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "d7521550-bc40-43eb-bcb0-f563416d810b" (UID: "d7521550-bc40-43eb-bcb0-f563416d810b"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.786256 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7521550-bc40-43eb-bcb0-f563416d810b-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "d7521550-bc40-43eb-bcb0-f563416d810b" (UID: "d7521550-bc40-43eb-bcb0-f563416d810b"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.786571 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-run-ovn\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.788035 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-host-run-ovn-kubernetes\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.788205 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-log-socket\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.788277 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-node-log\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.788341 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-systemd-units\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.788426 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-var-lib-openvswitch\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.788507 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-host-cni-netd\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.788649 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-etc-openvswitch\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.788775 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-ovnkube-config\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.788852 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-env-overrides\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.788923 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-ovnkube-script-lib\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.789027 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-run-systemd\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.789103 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-host-run-netns\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.789190 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-ovn-node-metrics-cert\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.789255 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-host-kubelet\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.789340 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-run-openvswitch\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.789424 5099 reconciler_common.go:299] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.789484 5099 reconciler_common.go:299] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-host-slash\") on node \"crc\" DevicePath \"\""
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.789541 5099 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d7521550-bc40-43eb-bcb0-f563416d810b-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.789594 5099 reconciler_common.go:299] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-run-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.789650 5099 reconciler_common.go:299] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-host-run-netns\") on node \"crc\" DevicePath \"\""
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.789704 5099 reconciler_common.go:299] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-run-ovn\") on node \"crc\" DevicePath \"\""
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.789779 5099 reconciler_common.go:299] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-node-log\") on node \"crc\" DevicePath \"\""
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.789838 5099 reconciler_common.go:299] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-host-cni-bin\") on node \"crc\" DevicePath \"\""
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.789931 5099 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d7521550-bc40-43eb-bcb0-f563416d810b-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.789992 5099 reconciler_common.go:299] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.790049 5099 reconciler_common.go:299] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-host-cni-netd\") on node \"crc\" DevicePath \"\""
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.790100 5099 reconciler_common.go:299] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-host-kubelet\") on node \"crc\" DevicePath \"\""
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.790148 5099 reconciler_common.go:299] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.790203 5099 reconciler_common.go:299] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-log-socket\") on node \"crc\" DevicePath \"\""
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.790258 5099 reconciler_common.go:299] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.790314 5099 reconciler_common.go:299] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-systemd-units\") on node \"crc\" DevicePath \"\""
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.790369 5099 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d7521550-bc40-43eb-bcb0-f563416d810b-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.825919 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7521550-bc40-43eb-bcb0-f563416d810b-kube-api-access-ss74r" (OuterVolumeSpecName: "kube-api-access-ss74r") pod "d7521550-bc40-43eb-bcb0-f563416d810b" (UID: "d7521550-bc40-43eb-bcb0-f563416d810b"). InnerVolumeSpecName "kube-api-access-ss74r". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.833474 5099 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.838836 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7521550-bc40-43eb-bcb0-f563416d810b-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "d7521550-bc40-43eb-bcb0-f563416d810b" (UID: "d7521550-bc40-43eb-bcb0-f563416d810b"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.842813 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "d7521550-bc40-43eb-bcb0-f563416d810b" (UID: "d7521550-bc40-43eb-bcb0-f563416d810b"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.891577 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-run-openvswitch\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.891645 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-host-slash\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.891810 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-run-openvswitch\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.891922 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gldst\" (UniqueName: \"kubernetes.io/projected/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-kube-api-access-gldst\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.891973 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-host-cni-bin\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.892014 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.892044 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-run-ovn\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.892064 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-host-run-ovn-kubernetes\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.892094 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-log-socket\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.892112 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-node-log\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.892128 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-systemd-units\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.892150 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-var-lib-openvswitch\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.892171 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-host-cni-netd\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.892212 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-etc-openvswitch\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.892228 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-ovnkube-config\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.892247 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-env-overrides\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.892264 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-ovnkube-script-lib\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.892295 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-run-systemd\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.892314 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-host-run-netns\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.892336 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-ovn-node-metrics-cert\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.892352 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-host-kubelet\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.892392 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ss74r\" (UniqueName: \"kubernetes.io/projected/d7521550-bc40-43eb-bcb0-f563416d810b-kube-api-access-ss74r\") on node \"crc\" DevicePath \"\""
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.892403 5099 reconciler_common.go:299] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d7521550-bc40-43eb-bcb0-f563416d810b-run-systemd\") on node \"crc\" DevicePath \"\""
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.892413 5099 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d7521550-bc40-43eb-bcb0-f563416d810b-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.892473 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-host-kubelet\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.892518 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-host-slash\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.893016 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-host-cni-bin\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.893052 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.893080 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-run-ovn\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.893105 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-host-run-ovn-kubernetes\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.893134 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-log-socket\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.893164 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-node-log\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.893187 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-systemd-units\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.893211 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-var-lib-openvswitch\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.893235 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-host-cni-netd\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.893262 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-etc-openvswitch\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.894085 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-run-systemd\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz"
Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.894330 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-host-run-netns\") pod
\"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz" Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.894911 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-env-overrides\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz" Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.895607 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-ovnkube-script-lib\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz" Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.895940 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-ovnkube-config\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz" Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.899294 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-ovn-node-metrics-cert\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz" Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.910839 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gldst\" (UniqueName: \"kubernetes.io/projected/ff06f067-64cf-4a9e-ac17-dc00bf627f6d-kube-api-access-gldst\") pod \"ovnkube-node-tz5cz\" (UID: \"ff06f067-64cf-4a9e-ac17-dc00bf627f6d\") " pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz" Jan 21 18:25:31 crc kubenswrapper[5099]: I0121 18:25:31.922120 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd3b8a6d-69a8-4079-a747-f379b71bcafe" path="/var/lib/kubelet/pods/dd3b8a6d-69a8-4079-a747-f379b71bcafe/volumes" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.051027 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz" Jan 21 18:25:32 crc kubenswrapper[5099]: W0121 18:25:32.080071 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff06f067_64cf_4a9e_ac17_dc00bf627f6d.slice/crio-5cbbb539ac6713bcb6858b31f308112db471f9ab0c4100ed4a7970f40e94bc33 WatchSource:0}: Error finding container 5cbbb539ac6713bcb6858b31f308112db471f9ab0c4100ed4a7970f40e94bc33: Status 404 returned error can't find the container with id 5cbbb539ac6713bcb6858b31f308112db471f9ab0c4100ed4a7970f40e94bc33 Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.428894 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-svjkb_d7521550-bc40-43eb-bcb0-f563416d810b/ovn-acl-logging/0.log" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.431526 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-svjkb_d7521550-bc40-43eb-bcb0-f563416d810b/ovn-controller/0.log" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.432538 5099 generic.go:358] "Generic (PLEG): container finished" podID="d7521550-bc40-43eb-bcb0-f563416d810b" containerID="b9195a13c90ca87a12f76ad8f66c088f4af22fe03f2960a4bab916c4d5d5b2d7" exitCode=0 Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.432648 5099 generic.go:358] "Generic (PLEG): container finished" podID="d7521550-bc40-43eb-bcb0-f563416d810b" containerID="715177bbd2f8050a52d5a8fac8a582eb809c747ef416e4a4da743949f600357b" exitCode=0 Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.432724 5099 generic.go:358] "Generic (PLEG): container finished" podID="d7521550-bc40-43eb-bcb0-f563416d810b" containerID="ec56a2db33e5899e939975d066614498ddc50ad9fd51d101c00aeb90cb8a4454" exitCode=0 Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.432823 5099 generic.go:358] "Generic (PLEG): container finished" podID="d7521550-bc40-43eb-bcb0-f563416d810b" containerID="5eac65293831383e6044c55378d17fa35899f3a75c86c3ef4ead445b6e94bbf3" exitCode=0 Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.432893 5099 generic.go:358] "Generic (PLEG): container finished" podID="d7521550-bc40-43eb-bcb0-f563416d810b" containerID="15e8f6887fa5951d2123b3e4d28614bc9f8a4dd5867c331733faa3802c626ab0" exitCode=0 Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.432955 5099 generic.go:358] "Generic (PLEG): container finished" podID="d7521550-bc40-43eb-bcb0-f563416d810b" containerID="078229736ed14d660e3b4e681785719d4a6d65296cfeb9fb3bb80a4e3c6d88c1" exitCode=0 Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.433024 5099 generic.go:358] "Generic (PLEG): container finished" podID="d7521550-bc40-43eb-bcb0-f563416d810b" containerID="be8ed0e4f720658f71be2dd52370e9e4b72182cfaabe7e44bc52ace65660ab34" exitCode=143 Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.433094 5099 generic.go:358] "Generic (PLEG): container finished" podID="d7521550-bc40-43eb-bcb0-f563416d810b" containerID="3f3a6f8ec66e21454a7efd27b778af42d65195909a54ee49adcee8f6711d1059" exitCode=143 Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.433179 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" event={"ID":"d7521550-bc40-43eb-bcb0-f563416d810b","Type":"ContainerDied","Data":"b9195a13c90ca87a12f76ad8f66c088f4af22fe03f2960a4bab916c4d5d5b2d7"} Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.433146 5099 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.433255 5099 scope.go:117] "RemoveContainer" containerID="b9195a13c90ca87a12f76ad8f66c088f4af22fe03f2960a4bab916c4d5d5b2d7" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.433240 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" event={"ID":"d7521550-bc40-43eb-bcb0-f563416d810b","Type":"ContainerDied","Data":"715177bbd2f8050a52d5a8fac8a582eb809c747ef416e4a4da743949f600357b"} Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.433455 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" event={"ID":"d7521550-bc40-43eb-bcb0-f563416d810b","Type":"ContainerDied","Data":"ec56a2db33e5899e939975d066614498ddc50ad9fd51d101c00aeb90cb8a4454"} Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.433513 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" event={"ID":"d7521550-bc40-43eb-bcb0-f563416d810b","Type":"ContainerDied","Data":"5eac65293831383e6044c55378d17fa35899f3a75c86c3ef4ead445b6e94bbf3"} Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.433531 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" event={"ID":"d7521550-bc40-43eb-bcb0-f563416d810b","Type":"ContainerDied","Data":"15e8f6887fa5951d2123b3e4d28614bc9f8a4dd5867c331733faa3802c626ab0"} Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.433546 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" event={"ID":"d7521550-bc40-43eb-bcb0-f563416d810b","Type":"ContainerDied","Data":"078229736ed14d660e3b4e681785719d4a6d65296cfeb9fb3bb80a4e3c6d88c1"} Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.433561 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"be8ed0e4f720658f71be2dd52370e9e4b72182cfaabe7e44bc52ace65660ab34"} Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.433575 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3f3a6f8ec66e21454a7efd27b778af42d65195909a54ee49adcee8f6711d1059"} Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.433582 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"329659e4786b839f2082f85117e8f694a6aa7d675d581f47afed9010eae8cdc7"} Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.433592 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" event={"ID":"d7521550-bc40-43eb-bcb0-f563416d810b","Type":"ContainerDied","Data":"be8ed0e4f720658f71be2dd52370e9e4b72182cfaabe7e44bc52ace65660ab34"} Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.433602 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b9195a13c90ca87a12f76ad8f66c088f4af22fe03f2960a4bab916c4d5d5b2d7"} Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.433611 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"715177bbd2f8050a52d5a8fac8a582eb809c747ef416e4a4da743949f600357b"} Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.433618 5099 pod_container_deletor.go:114] "Failed to issue the request to remove 
container" containerID={"Type":"cri-o","ID":"ec56a2db33e5899e939975d066614498ddc50ad9fd51d101c00aeb90cb8a4454"} Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.433624 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5eac65293831383e6044c55378d17fa35899f3a75c86c3ef4ead445b6e94bbf3"} Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.433631 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"15e8f6887fa5951d2123b3e4d28614bc9f8a4dd5867c331733faa3802c626ab0"} Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.433641 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"078229736ed14d660e3b4e681785719d4a6d65296cfeb9fb3bb80a4e3c6d88c1"} Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.433648 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"be8ed0e4f720658f71be2dd52370e9e4b72182cfaabe7e44bc52ace65660ab34"} Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.433654 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3f3a6f8ec66e21454a7efd27b778af42d65195909a54ee49adcee8f6711d1059"} Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.433661 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"329659e4786b839f2082f85117e8f694a6aa7d675d581f47afed9010eae8cdc7"} Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.433672 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" event={"ID":"d7521550-bc40-43eb-bcb0-f563416d810b","Type":"ContainerDied","Data":"3f3a6f8ec66e21454a7efd27b778af42d65195909a54ee49adcee8f6711d1059"} Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.433684 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b9195a13c90ca87a12f76ad8f66c088f4af22fe03f2960a4bab916c4d5d5b2d7"} Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.433767 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"715177bbd2f8050a52d5a8fac8a582eb809c747ef416e4a4da743949f600357b"} Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.433778 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ec56a2db33e5899e939975d066614498ddc50ad9fd51d101c00aeb90cb8a4454"} Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.433784 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5eac65293831383e6044c55378d17fa35899f3a75c86c3ef4ead445b6e94bbf3"} Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.433791 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"15e8f6887fa5951d2123b3e4d28614bc9f8a4dd5867c331733faa3802c626ab0"} Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.433797 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"078229736ed14d660e3b4e681785719d4a6d65296cfeb9fb3bb80a4e3c6d88c1"} Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.433804 5099 pod_container_deletor.go:114] "Failed to issue the request to remove 
container" containerID={"Type":"cri-o","ID":"be8ed0e4f720658f71be2dd52370e9e4b72182cfaabe7e44bc52ace65660ab34"} Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.433810 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3f3a6f8ec66e21454a7efd27b778af42d65195909a54ee49adcee8f6711d1059"} Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.433816 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"329659e4786b839f2082f85117e8f694a6aa7d675d581f47afed9010eae8cdc7"} Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.433826 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-svjkb" event={"ID":"d7521550-bc40-43eb-bcb0-f563416d810b","Type":"ContainerDied","Data":"c326c766374cb3d8f2394017baca2bb66b98a85298a8696e57a7f70208606df7"} Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.433838 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b9195a13c90ca87a12f76ad8f66c088f4af22fe03f2960a4bab916c4d5d5b2d7"} Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.433846 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"715177bbd2f8050a52d5a8fac8a582eb809c747ef416e4a4da743949f600357b"} Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.433852 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ec56a2db33e5899e939975d066614498ddc50ad9fd51d101c00aeb90cb8a4454"} Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.433858 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5eac65293831383e6044c55378d17fa35899f3a75c86c3ef4ead445b6e94bbf3"} Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.433864 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"15e8f6887fa5951d2123b3e4d28614bc9f8a4dd5867c331733faa3802c626ab0"} Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.433870 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"078229736ed14d660e3b4e681785719d4a6d65296cfeb9fb3bb80a4e3c6d88c1"} Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.433876 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"be8ed0e4f720658f71be2dd52370e9e4b72182cfaabe7e44bc52ace65660ab34"} Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.433882 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3f3a6f8ec66e21454a7efd27b778af42d65195909a54ee49adcee8f6711d1059"} Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.433888 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"329659e4786b839f2082f85117e8f694a6aa7d675d581f47afed9010eae8cdc7"} Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.444258 5099 generic.go:358] "Generic (PLEG): container finished" podID="ff06f067-64cf-4a9e-ac17-dc00bf627f6d" containerID="d5d9ca107bd60e299cca2dac8b37ef342da13cebfac6356240c6c64858a6ca2f" exitCode=0 Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.444458 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz" event={"ID":"ff06f067-64cf-4a9e-ac17-dc00bf627f6d","Type":"ContainerDied","Data":"d5d9ca107bd60e299cca2dac8b37ef342da13cebfac6356240c6c64858a6ca2f"} Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.444512 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz" event={"ID":"ff06f067-64cf-4a9e-ac17-dc00bf627f6d","Type":"ContainerStarted","Data":"5cbbb539ac6713bcb6858b31f308112db471f9ab0c4100ed4a7970f40e94bc33"} Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.450645 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sfhv4" event={"ID":"c532b732-a208-44d7-803c-103787a3b893","Type":"ContainerStarted","Data":"14d90f5adbf7b5dec1bebc7b3c64b55374d7b26e24b6eae96f89b43b2e5e741a"} Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.450912 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sfhv4" event={"ID":"c532b732-a208-44d7-803c-103787a3b893","Type":"ContainerStarted","Data":"9cda4d775a2dbcedd9134bebe481fc63506080b09cd0ac50ecbbca014bbda0d6"} Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.450924 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sfhv4" event={"ID":"c532b732-a208-44d7-803c-103787a3b893","Type":"ContainerStarted","Data":"ecbf398876c25abf53ed133a1348f62ebcbc9640240168fb8c9548de22a7e2ab"} Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.457922 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6pvpm_d9b34413-4767-4d59-b13b-8f882453977a/kube-multus/0.log" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.457968 5099 generic.go:358] "Generic (PLEG): container finished" podID="d9b34413-4767-4d59-b13b-8f882453977a" containerID="22c5bf9bc5a8e6069ae71e1c268ae1a485f69de67b5e9606ce7e353dd2c8c6c1" exitCode=2 Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.458085 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6pvpm" event={"ID":"d9b34413-4767-4d59-b13b-8f882453977a","Type":"ContainerDied","Data":"22c5bf9bc5a8e6069ae71e1c268ae1a485f69de67b5e9606ce7e353dd2c8c6c1"} Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.458651 5099 scope.go:117] "RemoveContainer" containerID="22c5bf9bc5a8e6069ae71e1c268ae1a485f69de67b5e9606ce7e353dd2c8c6c1" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.469000 5099 scope.go:117] "RemoveContainer" containerID="715177bbd2f8050a52d5a8fac8a582eb809c747ef416e4a4da743949f600357b" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.474313 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-svjkb"] Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.480111 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-svjkb"] Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.509778 5099 scope.go:117] "RemoveContainer" containerID="ec56a2db33e5899e939975d066614498ddc50ad9fd51d101c00aeb90cb8a4454" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.539529 5099 scope.go:117] "RemoveContainer" containerID="5eac65293831383e6044c55378d17fa35899f3a75c86c3ef4ead445b6e94bbf3" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.566790 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sfhv4" podStartSLOduration=1.566761805 podStartE2EDuration="1.566761805s" podCreationTimestamp="2026-01-21 18:25:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:25:32.556002805 +0000 UTC m=+689.969965296" watchObservedRunningTime="2026-01-21 18:25:32.566761805 +0000 UTC m=+689.980724276" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.582855 5099 scope.go:117] "RemoveContainer" containerID="15e8f6887fa5951d2123b3e4d28614bc9f8a4dd5867c331733faa3802c626ab0" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.609838 5099 scope.go:117] "RemoveContainer" containerID="078229736ed14d660e3b4e681785719d4a6d65296cfeb9fb3bb80a4e3c6d88c1" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.631346 5099 scope.go:117] "RemoveContainer" containerID="be8ed0e4f720658f71be2dd52370e9e4b72182cfaabe7e44bc52ace65660ab34" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.651752 5099 scope.go:117] "RemoveContainer" containerID="3f3a6f8ec66e21454a7efd27b778af42d65195909a54ee49adcee8f6711d1059" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.680201 5099 scope.go:117] "RemoveContainer" containerID="329659e4786b839f2082f85117e8f694a6aa7d675d581f47afed9010eae8cdc7" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.700022 5099 scope.go:117] "RemoveContainer" containerID="b9195a13c90ca87a12f76ad8f66c088f4af22fe03f2960a4bab916c4d5d5b2d7" Jan 21 18:25:32 crc kubenswrapper[5099]: E0121 18:25:32.700412 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9195a13c90ca87a12f76ad8f66c088f4af22fe03f2960a4bab916c4d5d5b2d7\": container with ID starting with b9195a13c90ca87a12f76ad8f66c088f4af22fe03f2960a4bab916c4d5d5b2d7 not found: ID does not exist" containerID="b9195a13c90ca87a12f76ad8f66c088f4af22fe03f2960a4bab916c4d5d5b2d7" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.700445 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9195a13c90ca87a12f76ad8f66c088f4af22fe03f2960a4bab916c4d5d5b2d7"} err="failed to get container status \"b9195a13c90ca87a12f76ad8f66c088f4af22fe03f2960a4bab916c4d5d5b2d7\": rpc error: code = NotFound desc = could not find container \"b9195a13c90ca87a12f76ad8f66c088f4af22fe03f2960a4bab916c4d5d5b2d7\": container with ID starting with b9195a13c90ca87a12f76ad8f66c088f4af22fe03f2960a4bab916c4d5d5b2d7 not found: ID does not exist" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.700468 5099 scope.go:117] "RemoveContainer" containerID="715177bbd2f8050a52d5a8fac8a582eb809c747ef416e4a4da743949f600357b" Jan 21 18:25:32 crc kubenswrapper[5099]: E0121 18:25:32.700755 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"715177bbd2f8050a52d5a8fac8a582eb809c747ef416e4a4da743949f600357b\": container with ID starting with 715177bbd2f8050a52d5a8fac8a582eb809c747ef416e4a4da743949f600357b not found: ID does not exist" containerID="715177bbd2f8050a52d5a8fac8a582eb809c747ef416e4a4da743949f600357b" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.700781 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"715177bbd2f8050a52d5a8fac8a582eb809c747ef416e4a4da743949f600357b"} err="failed to get container status 
\"715177bbd2f8050a52d5a8fac8a582eb809c747ef416e4a4da743949f600357b\": rpc error: code = NotFound desc = could not find container \"715177bbd2f8050a52d5a8fac8a582eb809c747ef416e4a4da743949f600357b\": container with ID starting with 715177bbd2f8050a52d5a8fac8a582eb809c747ef416e4a4da743949f600357b not found: ID does not exist" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.700797 5099 scope.go:117] "RemoveContainer" containerID="ec56a2db33e5899e939975d066614498ddc50ad9fd51d101c00aeb90cb8a4454" Jan 21 18:25:32 crc kubenswrapper[5099]: E0121 18:25:32.700988 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec56a2db33e5899e939975d066614498ddc50ad9fd51d101c00aeb90cb8a4454\": container with ID starting with ec56a2db33e5899e939975d066614498ddc50ad9fd51d101c00aeb90cb8a4454 not found: ID does not exist" containerID="ec56a2db33e5899e939975d066614498ddc50ad9fd51d101c00aeb90cb8a4454" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.701011 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec56a2db33e5899e939975d066614498ddc50ad9fd51d101c00aeb90cb8a4454"} err="failed to get container status \"ec56a2db33e5899e939975d066614498ddc50ad9fd51d101c00aeb90cb8a4454\": rpc error: code = NotFound desc = could not find container \"ec56a2db33e5899e939975d066614498ddc50ad9fd51d101c00aeb90cb8a4454\": container with ID starting with ec56a2db33e5899e939975d066614498ddc50ad9fd51d101c00aeb90cb8a4454 not found: ID does not exist" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.701023 5099 scope.go:117] "RemoveContainer" containerID="5eac65293831383e6044c55378d17fa35899f3a75c86c3ef4ead445b6e94bbf3" Jan 21 18:25:32 crc kubenswrapper[5099]: E0121 18:25:32.701236 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5eac65293831383e6044c55378d17fa35899f3a75c86c3ef4ead445b6e94bbf3\": container with ID starting with 5eac65293831383e6044c55378d17fa35899f3a75c86c3ef4ead445b6e94bbf3 not found: ID does not exist" containerID="5eac65293831383e6044c55378d17fa35899f3a75c86c3ef4ead445b6e94bbf3" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.701263 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5eac65293831383e6044c55378d17fa35899f3a75c86c3ef4ead445b6e94bbf3"} err="failed to get container status \"5eac65293831383e6044c55378d17fa35899f3a75c86c3ef4ead445b6e94bbf3\": rpc error: code = NotFound desc = could not find container \"5eac65293831383e6044c55378d17fa35899f3a75c86c3ef4ead445b6e94bbf3\": container with ID starting with 5eac65293831383e6044c55378d17fa35899f3a75c86c3ef4ead445b6e94bbf3 not found: ID does not exist" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.701280 5099 scope.go:117] "RemoveContainer" containerID="15e8f6887fa5951d2123b3e4d28614bc9f8a4dd5867c331733faa3802c626ab0" Jan 21 18:25:32 crc kubenswrapper[5099]: E0121 18:25:32.701604 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15e8f6887fa5951d2123b3e4d28614bc9f8a4dd5867c331733faa3802c626ab0\": container with ID starting with 15e8f6887fa5951d2123b3e4d28614bc9f8a4dd5867c331733faa3802c626ab0 not found: ID does not exist" containerID="15e8f6887fa5951d2123b3e4d28614bc9f8a4dd5867c331733faa3802c626ab0" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.701627 5099 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15e8f6887fa5951d2123b3e4d28614bc9f8a4dd5867c331733faa3802c626ab0"} err="failed to get container status \"15e8f6887fa5951d2123b3e4d28614bc9f8a4dd5867c331733faa3802c626ab0\": rpc error: code = NotFound desc = could not find container \"15e8f6887fa5951d2123b3e4d28614bc9f8a4dd5867c331733faa3802c626ab0\": container with ID starting with 15e8f6887fa5951d2123b3e4d28614bc9f8a4dd5867c331733faa3802c626ab0 not found: ID does not exist" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.701640 5099 scope.go:117] "RemoveContainer" containerID="078229736ed14d660e3b4e681785719d4a6d65296cfeb9fb3bb80a4e3c6d88c1" Jan 21 18:25:32 crc kubenswrapper[5099]: E0121 18:25:32.702018 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"078229736ed14d660e3b4e681785719d4a6d65296cfeb9fb3bb80a4e3c6d88c1\": container with ID starting with 078229736ed14d660e3b4e681785719d4a6d65296cfeb9fb3bb80a4e3c6d88c1 not found: ID does not exist" containerID="078229736ed14d660e3b4e681785719d4a6d65296cfeb9fb3bb80a4e3c6d88c1" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.702080 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"078229736ed14d660e3b4e681785719d4a6d65296cfeb9fb3bb80a4e3c6d88c1"} err="failed to get container status \"078229736ed14d660e3b4e681785719d4a6d65296cfeb9fb3bb80a4e3c6d88c1\": rpc error: code = NotFound desc = could not find container \"078229736ed14d660e3b4e681785719d4a6d65296cfeb9fb3bb80a4e3c6d88c1\": container with ID starting with 078229736ed14d660e3b4e681785719d4a6d65296cfeb9fb3bb80a4e3c6d88c1 not found: ID does not exist" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.702116 5099 scope.go:117] "RemoveContainer" containerID="be8ed0e4f720658f71be2dd52370e9e4b72182cfaabe7e44bc52ace65660ab34" Jan 21 18:25:32 crc kubenswrapper[5099]: E0121 18:25:32.702438 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be8ed0e4f720658f71be2dd52370e9e4b72182cfaabe7e44bc52ace65660ab34\": container with ID starting with be8ed0e4f720658f71be2dd52370e9e4b72182cfaabe7e44bc52ace65660ab34 not found: ID does not exist" containerID="be8ed0e4f720658f71be2dd52370e9e4b72182cfaabe7e44bc52ace65660ab34" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.702468 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be8ed0e4f720658f71be2dd52370e9e4b72182cfaabe7e44bc52ace65660ab34"} err="failed to get container status \"be8ed0e4f720658f71be2dd52370e9e4b72182cfaabe7e44bc52ace65660ab34\": rpc error: code = NotFound desc = could not find container \"be8ed0e4f720658f71be2dd52370e9e4b72182cfaabe7e44bc52ace65660ab34\": container with ID starting with be8ed0e4f720658f71be2dd52370e9e4b72182cfaabe7e44bc52ace65660ab34 not found: ID does not exist" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.702488 5099 scope.go:117] "RemoveContainer" containerID="3f3a6f8ec66e21454a7efd27b778af42d65195909a54ee49adcee8f6711d1059" Jan 21 18:25:32 crc kubenswrapper[5099]: E0121 18:25:32.702939 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f3a6f8ec66e21454a7efd27b778af42d65195909a54ee49adcee8f6711d1059\": container with ID starting with 3f3a6f8ec66e21454a7efd27b778af42d65195909a54ee49adcee8f6711d1059 not found: ID does not exist" 
containerID="3f3a6f8ec66e21454a7efd27b778af42d65195909a54ee49adcee8f6711d1059" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.702981 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f3a6f8ec66e21454a7efd27b778af42d65195909a54ee49adcee8f6711d1059"} err="failed to get container status \"3f3a6f8ec66e21454a7efd27b778af42d65195909a54ee49adcee8f6711d1059\": rpc error: code = NotFound desc = could not find container \"3f3a6f8ec66e21454a7efd27b778af42d65195909a54ee49adcee8f6711d1059\": container with ID starting with 3f3a6f8ec66e21454a7efd27b778af42d65195909a54ee49adcee8f6711d1059 not found: ID does not exist" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.703003 5099 scope.go:117] "RemoveContainer" containerID="329659e4786b839f2082f85117e8f694a6aa7d675d581f47afed9010eae8cdc7" Jan 21 18:25:32 crc kubenswrapper[5099]: E0121 18:25:32.703248 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"329659e4786b839f2082f85117e8f694a6aa7d675d581f47afed9010eae8cdc7\": container with ID starting with 329659e4786b839f2082f85117e8f694a6aa7d675d581f47afed9010eae8cdc7 not found: ID does not exist" containerID="329659e4786b839f2082f85117e8f694a6aa7d675d581f47afed9010eae8cdc7" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.703271 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"329659e4786b839f2082f85117e8f694a6aa7d675d581f47afed9010eae8cdc7"} err="failed to get container status \"329659e4786b839f2082f85117e8f694a6aa7d675d581f47afed9010eae8cdc7\": rpc error: code = NotFound desc = could not find container \"329659e4786b839f2082f85117e8f694a6aa7d675d581f47afed9010eae8cdc7\": container with ID starting with 329659e4786b839f2082f85117e8f694a6aa7d675d581f47afed9010eae8cdc7 not found: ID does not exist" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.703289 5099 scope.go:117] "RemoveContainer" containerID="b9195a13c90ca87a12f76ad8f66c088f4af22fe03f2960a4bab916c4d5d5b2d7" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.703718 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9195a13c90ca87a12f76ad8f66c088f4af22fe03f2960a4bab916c4d5d5b2d7"} err="failed to get container status \"b9195a13c90ca87a12f76ad8f66c088f4af22fe03f2960a4bab916c4d5d5b2d7\": rpc error: code = NotFound desc = could not find container \"b9195a13c90ca87a12f76ad8f66c088f4af22fe03f2960a4bab916c4d5d5b2d7\": container with ID starting with b9195a13c90ca87a12f76ad8f66c088f4af22fe03f2960a4bab916c4d5d5b2d7 not found: ID does not exist" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.703790 5099 scope.go:117] "RemoveContainer" containerID="715177bbd2f8050a52d5a8fac8a582eb809c747ef416e4a4da743949f600357b" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.704056 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"715177bbd2f8050a52d5a8fac8a582eb809c747ef416e4a4da743949f600357b"} err="failed to get container status \"715177bbd2f8050a52d5a8fac8a582eb809c747ef416e4a4da743949f600357b\": rpc error: code = NotFound desc = could not find container \"715177bbd2f8050a52d5a8fac8a582eb809c747ef416e4a4da743949f600357b\": container with ID starting with 715177bbd2f8050a52d5a8fac8a582eb809c747ef416e4a4da743949f600357b not found: ID does not exist" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.704081 5099 scope.go:117] "RemoveContainer" 
containerID="ec56a2db33e5899e939975d066614498ddc50ad9fd51d101c00aeb90cb8a4454" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.704394 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec56a2db33e5899e939975d066614498ddc50ad9fd51d101c00aeb90cb8a4454"} err="failed to get container status \"ec56a2db33e5899e939975d066614498ddc50ad9fd51d101c00aeb90cb8a4454\": rpc error: code = NotFound desc = could not find container \"ec56a2db33e5899e939975d066614498ddc50ad9fd51d101c00aeb90cb8a4454\": container with ID starting with ec56a2db33e5899e939975d066614498ddc50ad9fd51d101c00aeb90cb8a4454 not found: ID does not exist" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.704427 5099 scope.go:117] "RemoveContainer" containerID="5eac65293831383e6044c55378d17fa35899f3a75c86c3ef4ead445b6e94bbf3" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.704772 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5eac65293831383e6044c55378d17fa35899f3a75c86c3ef4ead445b6e94bbf3"} err="failed to get container status \"5eac65293831383e6044c55378d17fa35899f3a75c86c3ef4ead445b6e94bbf3\": rpc error: code = NotFound desc = could not find container \"5eac65293831383e6044c55378d17fa35899f3a75c86c3ef4ead445b6e94bbf3\": container with ID starting with 5eac65293831383e6044c55378d17fa35899f3a75c86c3ef4ead445b6e94bbf3 not found: ID does not exist" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.704806 5099 scope.go:117] "RemoveContainer" containerID="15e8f6887fa5951d2123b3e4d28614bc9f8a4dd5867c331733faa3802c626ab0" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.706175 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15e8f6887fa5951d2123b3e4d28614bc9f8a4dd5867c331733faa3802c626ab0"} err="failed to get container status \"15e8f6887fa5951d2123b3e4d28614bc9f8a4dd5867c331733faa3802c626ab0\": rpc error: code = NotFound desc = could not find container \"15e8f6887fa5951d2123b3e4d28614bc9f8a4dd5867c331733faa3802c626ab0\": container with ID starting with 15e8f6887fa5951d2123b3e4d28614bc9f8a4dd5867c331733faa3802c626ab0 not found: ID does not exist" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.706236 5099 scope.go:117] "RemoveContainer" containerID="078229736ed14d660e3b4e681785719d4a6d65296cfeb9fb3bb80a4e3c6d88c1" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.707543 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"078229736ed14d660e3b4e681785719d4a6d65296cfeb9fb3bb80a4e3c6d88c1"} err="failed to get container status \"078229736ed14d660e3b4e681785719d4a6d65296cfeb9fb3bb80a4e3c6d88c1\": rpc error: code = NotFound desc = could not find container \"078229736ed14d660e3b4e681785719d4a6d65296cfeb9fb3bb80a4e3c6d88c1\": container with ID starting with 078229736ed14d660e3b4e681785719d4a6d65296cfeb9fb3bb80a4e3c6d88c1 not found: ID does not exist" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.707571 5099 scope.go:117] "RemoveContainer" containerID="be8ed0e4f720658f71be2dd52370e9e4b72182cfaabe7e44bc52ace65660ab34" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.707903 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be8ed0e4f720658f71be2dd52370e9e4b72182cfaabe7e44bc52ace65660ab34"} err="failed to get container status \"be8ed0e4f720658f71be2dd52370e9e4b72182cfaabe7e44bc52ace65660ab34\": rpc error: code = NotFound desc = could not find 
container \"be8ed0e4f720658f71be2dd52370e9e4b72182cfaabe7e44bc52ace65660ab34\": container with ID starting with be8ed0e4f720658f71be2dd52370e9e4b72182cfaabe7e44bc52ace65660ab34 not found: ID does not exist" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.707932 5099 scope.go:117] "RemoveContainer" containerID="3f3a6f8ec66e21454a7efd27b778af42d65195909a54ee49adcee8f6711d1059" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.708295 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f3a6f8ec66e21454a7efd27b778af42d65195909a54ee49adcee8f6711d1059"} err="failed to get container status \"3f3a6f8ec66e21454a7efd27b778af42d65195909a54ee49adcee8f6711d1059\": rpc error: code = NotFound desc = could not find container \"3f3a6f8ec66e21454a7efd27b778af42d65195909a54ee49adcee8f6711d1059\": container with ID starting with 3f3a6f8ec66e21454a7efd27b778af42d65195909a54ee49adcee8f6711d1059 not found: ID does not exist" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.708325 5099 scope.go:117] "RemoveContainer" containerID="329659e4786b839f2082f85117e8f694a6aa7d675d581f47afed9010eae8cdc7" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.708565 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"329659e4786b839f2082f85117e8f694a6aa7d675d581f47afed9010eae8cdc7"} err="failed to get container status \"329659e4786b839f2082f85117e8f694a6aa7d675d581f47afed9010eae8cdc7\": rpc error: code = NotFound desc = could not find container \"329659e4786b839f2082f85117e8f694a6aa7d675d581f47afed9010eae8cdc7\": container with ID starting with 329659e4786b839f2082f85117e8f694a6aa7d675d581f47afed9010eae8cdc7 not found: ID does not exist" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.708586 5099 scope.go:117] "RemoveContainer" containerID="b9195a13c90ca87a12f76ad8f66c088f4af22fe03f2960a4bab916c4d5d5b2d7" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.708825 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9195a13c90ca87a12f76ad8f66c088f4af22fe03f2960a4bab916c4d5d5b2d7"} err="failed to get container status \"b9195a13c90ca87a12f76ad8f66c088f4af22fe03f2960a4bab916c4d5d5b2d7\": rpc error: code = NotFound desc = could not find container \"b9195a13c90ca87a12f76ad8f66c088f4af22fe03f2960a4bab916c4d5d5b2d7\": container with ID starting with b9195a13c90ca87a12f76ad8f66c088f4af22fe03f2960a4bab916c4d5d5b2d7 not found: ID does not exist" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.708846 5099 scope.go:117] "RemoveContainer" containerID="715177bbd2f8050a52d5a8fac8a582eb809c747ef416e4a4da743949f600357b" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.709161 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"715177bbd2f8050a52d5a8fac8a582eb809c747ef416e4a4da743949f600357b"} err="failed to get container status \"715177bbd2f8050a52d5a8fac8a582eb809c747ef416e4a4da743949f600357b\": rpc error: code = NotFound desc = could not find container \"715177bbd2f8050a52d5a8fac8a582eb809c747ef416e4a4da743949f600357b\": container with ID starting with 715177bbd2f8050a52d5a8fac8a582eb809c747ef416e4a4da743949f600357b not found: ID does not exist" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.709246 5099 scope.go:117] "RemoveContainer" containerID="ec56a2db33e5899e939975d066614498ddc50ad9fd51d101c00aeb90cb8a4454" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.709516 5099 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec56a2db33e5899e939975d066614498ddc50ad9fd51d101c00aeb90cb8a4454"} err="failed to get container status \"ec56a2db33e5899e939975d066614498ddc50ad9fd51d101c00aeb90cb8a4454\": rpc error: code = NotFound desc = could not find container \"ec56a2db33e5899e939975d066614498ddc50ad9fd51d101c00aeb90cb8a4454\": container with ID starting with ec56a2db33e5899e939975d066614498ddc50ad9fd51d101c00aeb90cb8a4454 not found: ID does not exist" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.709545 5099 scope.go:117] "RemoveContainer" containerID="5eac65293831383e6044c55378d17fa35899f3a75c86c3ef4ead445b6e94bbf3" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.709833 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5eac65293831383e6044c55378d17fa35899f3a75c86c3ef4ead445b6e94bbf3"} err="failed to get container status \"5eac65293831383e6044c55378d17fa35899f3a75c86c3ef4ead445b6e94bbf3\": rpc error: code = NotFound desc = could not find container \"5eac65293831383e6044c55378d17fa35899f3a75c86c3ef4ead445b6e94bbf3\": container with ID starting with 5eac65293831383e6044c55378d17fa35899f3a75c86c3ef4ead445b6e94bbf3 not found: ID does not exist" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.709856 5099 scope.go:117] "RemoveContainer" containerID="15e8f6887fa5951d2123b3e4d28614bc9f8a4dd5867c331733faa3802c626ab0" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.710091 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15e8f6887fa5951d2123b3e4d28614bc9f8a4dd5867c331733faa3802c626ab0"} err="failed to get container status \"15e8f6887fa5951d2123b3e4d28614bc9f8a4dd5867c331733faa3802c626ab0\": rpc error: code = NotFound desc = could not find container \"15e8f6887fa5951d2123b3e4d28614bc9f8a4dd5867c331733faa3802c626ab0\": container with ID starting with 15e8f6887fa5951d2123b3e4d28614bc9f8a4dd5867c331733faa3802c626ab0 not found: ID does not exist" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.710114 5099 scope.go:117] "RemoveContainer" containerID="078229736ed14d660e3b4e681785719d4a6d65296cfeb9fb3bb80a4e3c6d88c1" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.710297 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"078229736ed14d660e3b4e681785719d4a6d65296cfeb9fb3bb80a4e3c6d88c1"} err="failed to get container status \"078229736ed14d660e3b4e681785719d4a6d65296cfeb9fb3bb80a4e3c6d88c1\": rpc error: code = NotFound desc = could not find container \"078229736ed14d660e3b4e681785719d4a6d65296cfeb9fb3bb80a4e3c6d88c1\": container with ID starting with 078229736ed14d660e3b4e681785719d4a6d65296cfeb9fb3bb80a4e3c6d88c1 not found: ID does not exist" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.710315 5099 scope.go:117] "RemoveContainer" containerID="be8ed0e4f720658f71be2dd52370e9e4b72182cfaabe7e44bc52ace65660ab34" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.710545 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be8ed0e4f720658f71be2dd52370e9e4b72182cfaabe7e44bc52ace65660ab34"} err="failed to get container status \"be8ed0e4f720658f71be2dd52370e9e4b72182cfaabe7e44bc52ace65660ab34\": rpc error: code = NotFound desc = could not find container \"be8ed0e4f720658f71be2dd52370e9e4b72182cfaabe7e44bc52ace65660ab34\": container with ID starting with 
be8ed0e4f720658f71be2dd52370e9e4b72182cfaabe7e44bc52ace65660ab34 not found: ID does not exist" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.710567 5099 scope.go:117] "RemoveContainer" containerID="3f3a6f8ec66e21454a7efd27b778af42d65195909a54ee49adcee8f6711d1059" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.711109 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f3a6f8ec66e21454a7efd27b778af42d65195909a54ee49adcee8f6711d1059"} err="failed to get container status \"3f3a6f8ec66e21454a7efd27b778af42d65195909a54ee49adcee8f6711d1059\": rpc error: code = NotFound desc = could not find container \"3f3a6f8ec66e21454a7efd27b778af42d65195909a54ee49adcee8f6711d1059\": container with ID starting with 3f3a6f8ec66e21454a7efd27b778af42d65195909a54ee49adcee8f6711d1059 not found: ID does not exist" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.711133 5099 scope.go:117] "RemoveContainer" containerID="329659e4786b839f2082f85117e8f694a6aa7d675d581f47afed9010eae8cdc7" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.711444 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"329659e4786b839f2082f85117e8f694a6aa7d675d581f47afed9010eae8cdc7"} err="failed to get container status \"329659e4786b839f2082f85117e8f694a6aa7d675d581f47afed9010eae8cdc7\": rpc error: code = NotFound desc = could not find container \"329659e4786b839f2082f85117e8f694a6aa7d675d581f47afed9010eae8cdc7\": container with ID starting with 329659e4786b839f2082f85117e8f694a6aa7d675d581f47afed9010eae8cdc7 not found: ID does not exist" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.711473 5099 scope.go:117] "RemoveContainer" containerID="b9195a13c90ca87a12f76ad8f66c088f4af22fe03f2960a4bab916c4d5d5b2d7" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.711685 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9195a13c90ca87a12f76ad8f66c088f4af22fe03f2960a4bab916c4d5d5b2d7"} err="failed to get container status \"b9195a13c90ca87a12f76ad8f66c088f4af22fe03f2960a4bab916c4d5d5b2d7\": rpc error: code = NotFound desc = could not find container \"b9195a13c90ca87a12f76ad8f66c088f4af22fe03f2960a4bab916c4d5d5b2d7\": container with ID starting with b9195a13c90ca87a12f76ad8f66c088f4af22fe03f2960a4bab916c4d5d5b2d7 not found: ID does not exist" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.711710 5099 scope.go:117] "RemoveContainer" containerID="715177bbd2f8050a52d5a8fac8a582eb809c747ef416e4a4da743949f600357b" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.711986 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"715177bbd2f8050a52d5a8fac8a582eb809c747ef416e4a4da743949f600357b"} err="failed to get container status \"715177bbd2f8050a52d5a8fac8a582eb809c747ef416e4a4da743949f600357b\": rpc error: code = NotFound desc = could not find container \"715177bbd2f8050a52d5a8fac8a582eb809c747ef416e4a4da743949f600357b\": container with ID starting with 715177bbd2f8050a52d5a8fac8a582eb809c747ef416e4a4da743949f600357b not found: ID does not exist" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.712008 5099 scope.go:117] "RemoveContainer" containerID="ec56a2db33e5899e939975d066614498ddc50ad9fd51d101c00aeb90cb8a4454" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.712224 5099 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ec56a2db33e5899e939975d066614498ddc50ad9fd51d101c00aeb90cb8a4454"} err="failed to get container status \"ec56a2db33e5899e939975d066614498ddc50ad9fd51d101c00aeb90cb8a4454\": rpc error: code = NotFound desc = could not find container \"ec56a2db33e5899e939975d066614498ddc50ad9fd51d101c00aeb90cb8a4454\": container with ID starting with ec56a2db33e5899e939975d066614498ddc50ad9fd51d101c00aeb90cb8a4454 not found: ID does not exist" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.712258 5099 scope.go:117] "RemoveContainer" containerID="5eac65293831383e6044c55378d17fa35899f3a75c86c3ef4ead445b6e94bbf3" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.712515 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5eac65293831383e6044c55378d17fa35899f3a75c86c3ef4ead445b6e94bbf3"} err="failed to get container status \"5eac65293831383e6044c55378d17fa35899f3a75c86c3ef4ead445b6e94bbf3\": rpc error: code = NotFound desc = could not find container \"5eac65293831383e6044c55378d17fa35899f3a75c86c3ef4ead445b6e94bbf3\": container with ID starting with 5eac65293831383e6044c55378d17fa35899f3a75c86c3ef4ead445b6e94bbf3 not found: ID does not exist" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.712535 5099 scope.go:117] "RemoveContainer" containerID="15e8f6887fa5951d2123b3e4d28614bc9f8a4dd5867c331733faa3802c626ab0" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.712772 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15e8f6887fa5951d2123b3e4d28614bc9f8a4dd5867c331733faa3802c626ab0"} err="failed to get container status \"15e8f6887fa5951d2123b3e4d28614bc9f8a4dd5867c331733faa3802c626ab0\": rpc error: code = NotFound desc = could not find container \"15e8f6887fa5951d2123b3e4d28614bc9f8a4dd5867c331733faa3802c626ab0\": container with ID starting with 15e8f6887fa5951d2123b3e4d28614bc9f8a4dd5867c331733faa3802c626ab0 not found: ID does not exist" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.712795 5099 scope.go:117] "RemoveContainer" containerID="078229736ed14d660e3b4e681785719d4a6d65296cfeb9fb3bb80a4e3c6d88c1" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.713060 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"078229736ed14d660e3b4e681785719d4a6d65296cfeb9fb3bb80a4e3c6d88c1"} err="failed to get container status \"078229736ed14d660e3b4e681785719d4a6d65296cfeb9fb3bb80a4e3c6d88c1\": rpc error: code = NotFound desc = could not find container \"078229736ed14d660e3b4e681785719d4a6d65296cfeb9fb3bb80a4e3c6d88c1\": container with ID starting with 078229736ed14d660e3b4e681785719d4a6d65296cfeb9fb3bb80a4e3c6d88c1 not found: ID does not exist" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.713091 5099 scope.go:117] "RemoveContainer" containerID="be8ed0e4f720658f71be2dd52370e9e4b72182cfaabe7e44bc52ace65660ab34" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.713363 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be8ed0e4f720658f71be2dd52370e9e4b72182cfaabe7e44bc52ace65660ab34"} err="failed to get container status \"be8ed0e4f720658f71be2dd52370e9e4b72182cfaabe7e44bc52ace65660ab34\": rpc error: code = NotFound desc = could not find container \"be8ed0e4f720658f71be2dd52370e9e4b72182cfaabe7e44bc52ace65660ab34\": container with ID starting with be8ed0e4f720658f71be2dd52370e9e4b72182cfaabe7e44bc52ace65660ab34 not found: ID does not exist" Jan 
21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.713421 5099 scope.go:117] "RemoveContainer" containerID="3f3a6f8ec66e21454a7efd27b778af42d65195909a54ee49adcee8f6711d1059" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.713767 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f3a6f8ec66e21454a7efd27b778af42d65195909a54ee49adcee8f6711d1059"} err="failed to get container status \"3f3a6f8ec66e21454a7efd27b778af42d65195909a54ee49adcee8f6711d1059\": rpc error: code = NotFound desc = could not find container \"3f3a6f8ec66e21454a7efd27b778af42d65195909a54ee49adcee8f6711d1059\": container with ID starting with 3f3a6f8ec66e21454a7efd27b778af42d65195909a54ee49adcee8f6711d1059 not found: ID does not exist" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.713792 5099 scope.go:117] "RemoveContainer" containerID="329659e4786b839f2082f85117e8f694a6aa7d675d581f47afed9010eae8cdc7" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.714006 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"329659e4786b839f2082f85117e8f694a6aa7d675d581f47afed9010eae8cdc7"} err="failed to get container status \"329659e4786b839f2082f85117e8f694a6aa7d675d581f47afed9010eae8cdc7\": rpc error: code = NotFound desc = could not find container \"329659e4786b839f2082f85117e8f694a6aa7d675d581f47afed9010eae8cdc7\": container with ID starting with 329659e4786b839f2082f85117e8f694a6aa7d675d581f47afed9010eae8cdc7 not found: ID does not exist" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.714040 5099 scope.go:117] "RemoveContainer" containerID="b9195a13c90ca87a12f76ad8f66c088f4af22fe03f2960a4bab916c4d5d5b2d7" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.714402 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9195a13c90ca87a12f76ad8f66c088f4af22fe03f2960a4bab916c4d5d5b2d7"} err="failed to get container status \"b9195a13c90ca87a12f76ad8f66c088f4af22fe03f2960a4bab916c4d5d5b2d7\": rpc error: code = NotFound desc = could not find container \"b9195a13c90ca87a12f76ad8f66c088f4af22fe03f2960a4bab916c4d5d5b2d7\": container with ID starting with b9195a13c90ca87a12f76ad8f66c088f4af22fe03f2960a4bab916c4d5d5b2d7 not found: ID does not exist" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.714444 5099 scope.go:117] "RemoveContainer" containerID="715177bbd2f8050a52d5a8fac8a582eb809c747ef416e4a4da743949f600357b" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.714817 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"715177bbd2f8050a52d5a8fac8a582eb809c747ef416e4a4da743949f600357b"} err="failed to get container status \"715177bbd2f8050a52d5a8fac8a582eb809c747ef416e4a4da743949f600357b\": rpc error: code = NotFound desc = could not find container \"715177bbd2f8050a52d5a8fac8a582eb809c747ef416e4a4da743949f600357b\": container with ID starting with 715177bbd2f8050a52d5a8fac8a582eb809c747ef416e4a4da743949f600357b not found: ID does not exist" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.714843 5099 scope.go:117] "RemoveContainer" containerID="ec56a2db33e5899e939975d066614498ddc50ad9fd51d101c00aeb90cb8a4454" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.715120 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec56a2db33e5899e939975d066614498ddc50ad9fd51d101c00aeb90cb8a4454"} err="failed to get container status 
\"ec56a2db33e5899e939975d066614498ddc50ad9fd51d101c00aeb90cb8a4454\": rpc error: code = NotFound desc = could not find container \"ec56a2db33e5899e939975d066614498ddc50ad9fd51d101c00aeb90cb8a4454\": container with ID starting with ec56a2db33e5899e939975d066614498ddc50ad9fd51d101c00aeb90cb8a4454 not found: ID does not exist" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.715141 5099 scope.go:117] "RemoveContainer" containerID="5eac65293831383e6044c55378d17fa35899f3a75c86c3ef4ead445b6e94bbf3" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.715520 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5eac65293831383e6044c55378d17fa35899f3a75c86c3ef4ead445b6e94bbf3"} err="failed to get container status \"5eac65293831383e6044c55378d17fa35899f3a75c86c3ef4ead445b6e94bbf3\": rpc error: code = NotFound desc = could not find container \"5eac65293831383e6044c55378d17fa35899f3a75c86c3ef4ead445b6e94bbf3\": container with ID starting with 5eac65293831383e6044c55378d17fa35899f3a75c86c3ef4ead445b6e94bbf3 not found: ID does not exist" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.715538 5099 scope.go:117] "RemoveContainer" containerID="15e8f6887fa5951d2123b3e4d28614bc9f8a4dd5867c331733faa3802c626ab0" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.715857 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15e8f6887fa5951d2123b3e4d28614bc9f8a4dd5867c331733faa3802c626ab0"} err="failed to get container status \"15e8f6887fa5951d2123b3e4d28614bc9f8a4dd5867c331733faa3802c626ab0\": rpc error: code = NotFound desc = could not find container \"15e8f6887fa5951d2123b3e4d28614bc9f8a4dd5867c331733faa3802c626ab0\": container with ID starting with 15e8f6887fa5951d2123b3e4d28614bc9f8a4dd5867c331733faa3802c626ab0 not found: ID does not exist" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.715906 5099 scope.go:117] "RemoveContainer" containerID="078229736ed14d660e3b4e681785719d4a6d65296cfeb9fb3bb80a4e3c6d88c1" Jan 21 18:25:32 crc kubenswrapper[5099]: I0121 18:25:32.716339 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"078229736ed14d660e3b4e681785719d4a6d65296cfeb9fb3bb80a4e3c6d88c1"} err="failed to get container status \"078229736ed14d660e3b4e681785719d4a6d65296cfeb9fb3bb80a4e3c6d88c1\": rpc error: code = NotFound desc = could not find container \"078229736ed14d660e3b4e681785719d4a6d65296cfeb9fb3bb80a4e3c6d88c1\": container with ID starting with 078229736ed14d660e3b4e681785719d4a6d65296cfeb9fb3bb80a4e3c6d88c1 not found: ID does not exist" Jan 21 18:25:33 crc kubenswrapper[5099]: I0121 18:25:33.466449 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6pvpm_d9b34413-4767-4d59-b13b-8f882453977a/kube-multus/0.log" Jan 21 18:25:33 crc kubenswrapper[5099]: I0121 18:25:33.466627 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6pvpm" event={"ID":"d9b34413-4767-4d59-b13b-8f882453977a","Type":"ContainerStarted","Data":"5b0f8753069b3b0e7ec6ca696833f5488534498c184c7e3ce048e35cd1545013"} Jan 21 18:25:33 crc kubenswrapper[5099]: I0121 18:25:33.472570 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz" event={"ID":"ff06f067-64cf-4a9e-ac17-dc00bf627f6d","Type":"ContainerStarted","Data":"45162e40b13111d009f909f13e5a83d73cefa465f63eb64c589a5a7addd38701"} Jan 21 18:25:33 crc kubenswrapper[5099]: I0121 18:25:33.472608 5099 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz" event={"ID":"ff06f067-64cf-4a9e-ac17-dc00bf627f6d","Type":"ContainerStarted","Data":"48ee6bf86200c4cc71dfb7d31e2d6189328836fea82446462012d553f82e707b"} Jan 21 18:25:33 crc kubenswrapper[5099]: I0121 18:25:33.472622 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz" event={"ID":"ff06f067-64cf-4a9e-ac17-dc00bf627f6d","Type":"ContainerStarted","Data":"6e807a5b8aad5d242ec1be5bfb4d0c247bbb5d3f2dcb18a013f5d679a956ba0d"} Jan 21 18:25:33 crc kubenswrapper[5099]: I0121 18:25:33.472632 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz" event={"ID":"ff06f067-64cf-4a9e-ac17-dc00bf627f6d","Type":"ContainerStarted","Data":"b685e5ea55d91abfee7694200e5e229bc3cb819fcdd3027284943e4d2cf0dde2"} Jan 21 18:25:33 crc kubenswrapper[5099]: I0121 18:25:33.472644 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz" event={"ID":"ff06f067-64cf-4a9e-ac17-dc00bf627f6d","Type":"ContainerStarted","Data":"7270fcb0d0be6924bc03e8ae3cfc737bd9d730c96074ed7d30d83223bb9dc507"} Jan 21 18:25:33 crc kubenswrapper[5099]: I0121 18:25:33.472655 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz" event={"ID":"ff06f067-64cf-4a9e-ac17-dc00bf627f6d","Type":"ContainerStarted","Data":"12359563d76b8a1f827626b96c6eaaf3fbe107ea609e483cb7e37678a6515856"} Jan 21 18:25:33 crc kubenswrapper[5099]: I0121 18:25:33.928507 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7521550-bc40-43eb-bcb0-f563416d810b" path="/var/lib/kubelet/pods/d7521550-bc40-43eb-bcb0-f563416d810b/volumes" Jan 21 18:25:36 crc kubenswrapper[5099]: I0121 18:25:36.502255 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz" event={"ID":"ff06f067-64cf-4a9e-ac17-dc00bf627f6d","Type":"ContainerStarted","Data":"b500c7bc505691ed92d8d8781131df91bf9c8137926d74e9fb08eb874479dce6"} Jan 21 18:25:38 crc kubenswrapper[5099]: I0121 18:25:38.523404 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz" event={"ID":"ff06f067-64cf-4a9e-ac17-dc00bf627f6d","Type":"ContainerStarted","Data":"a8f2c2eae0d02bde222026f4c7a85c469738c5e9e6c56fb1a6a75c31b28bf307"} Jan 21 18:25:38 crc kubenswrapper[5099]: I0121 18:25:38.524161 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz" Jan 21 18:25:38 crc kubenswrapper[5099]: I0121 18:25:38.553401 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz" podStartSLOduration=7.553383637 podStartE2EDuration="7.553383637s" podCreationTimestamp="2026-01-21 18:25:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:25:38.550863135 +0000 UTC m=+695.964825606" watchObservedRunningTime="2026-01-21 18:25:38.553383637 +0000 UTC m=+695.967346098" Jan 21 18:25:38 crc kubenswrapper[5099]: I0121 18:25:38.563400 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz" Jan 21 18:25:39 crc kubenswrapper[5099]: I0121 18:25:39.530531 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz" Jan 21 18:25:39 crc kubenswrapper[5099]: I0121 18:25:39.531006 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz" Jan 21 18:25:39 crc kubenswrapper[5099]: I0121 18:25:39.559549 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz" Jan 21 18:26:00 crc kubenswrapper[5099]: I0121 18:26:00.135721 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483666-mctxk"] Jan 21 18:26:00 crc kubenswrapper[5099]: I0121 18:26:00.141689 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483666-mctxk" Jan 21 18:26:00 crc kubenswrapper[5099]: I0121 18:26:00.144899 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 18:26:00 crc kubenswrapper[5099]: I0121 18:26:00.145035 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-z79sf\"" Jan 21 18:26:00 crc kubenswrapper[5099]: I0121 18:26:00.145572 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483666-mctxk"] Jan 21 18:26:00 crc kubenswrapper[5099]: I0121 18:26:00.147912 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 18:26:00 crc kubenswrapper[5099]: I0121 18:26:00.223662 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmpm5\" (UniqueName: \"kubernetes.io/projected/349fa1e0-3431-4ef2-8bf3-77c052a7e479-kube-api-access-pmpm5\") pod \"auto-csr-approver-29483666-mctxk\" (UID: \"349fa1e0-3431-4ef2-8bf3-77c052a7e479\") " pod="openshift-infra/auto-csr-approver-29483666-mctxk" Jan 21 18:26:00 crc kubenswrapper[5099]: I0121 18:26:00.325607 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pmpm5\" (UniqueName: \"kubernetes.io/projected/349fa1e0-3431-4ef2-8bf3-77c052a7e479-kube-api-access-pmpm5\") pod \"auto-csr-approver-29483666-mctxk\" (UID: \"349fa1e0-3431-4ef2-8bf3-77c052a7e479\") " pod="openshift-infra/auto-csr-approver-29483666-mctxk" Jan 21 18:26:00 crc kubenswrapper[5099]: I0121 18:26:00.351200 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmpm5\" (UniqueName: \"kubernetes.io/projected/349fa1e0-3431-4ef2-8bf3-77c052a7e479-kube-api-access-pmpm5\") pod \"auto-csr-approver-29483666-mctxk\" (UID: \"349fa1e0-3431-4ef2-8bf3-77c052a7e479\") " pod="openshift-infra/auto-csr-approver-29483666-mctxk" Jan 21 18:26:00 crc kubenswrapper[5099]: I0121 18:26:00.465545 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483666-mctxk" Jan 21 18:26:00 crc kubenswrapper[5099]: I0121 18:26:00.697652 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483666-mctxk"] Jan 21 18:26:00 crc kubenswrapper[5099]: I0121 18:26:00.712344 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483666-mctxk" event={"ID":"349fa1e0-3431-4ef2-8bf3-77c052a7e479","Type":"ContainerStarted","Data":"913a15a5bb7dd321502c6c13d5490fe47238247afee9f8c43347974070defbfa"} Jan 21 18:26:02 crc kubenswrapper[5099]: I0121 18:26:02.730793 5099 generic.go:358] "Generic (PLEG): container finished" podID="349fa1e0-3431-4ef2-8bf3-77c052a7e479" containerID="5505ca5afa481ffeacca2977f07078fb4d81fcf3b74a0ea1fc655414cc6a80e3" exitCode=0 Jan 21 18:26:02 crc kubenswrapper[5099]: I0121 18:26:02.730894 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483666-mctxk" event={"ID":"349fa1e0-3431-4ef2-8bf3-77c052a7e479","Type":"ContainerDied","Data":"5505ca5afa481ffeacca2977f07078fb4d81fcf3b74a0ea1fc655414cc6a80e3"} Jan 21 18:26:03 crc kubenswrapper[5099]: I0121 18:26:03.972048 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483666-mctxk" Jan 21 18:26:04 crc kubenswrapper[5099]: I0121 18:26:04.079504 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmpm5\" (UniqueName: \"kubernetes.io/projected/349fa1e0-3431-4ef2-8bf3-77c052a7e479-kube-api-access-pmpm5\") pod \"349fa1e0-3431-4ef2-8bf3-77c052a7e479\" (UID: \"349fa1e0-3431-4ef2-8bf3-77c052a7e479\") " Jan 21 18:26:04 crc kubenswrapper[5099]: I0121 18:26:04.087112 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/349fa1e0-3431-4ef2-8bf3-77c052a7e479-kube-api-access-pmpm5" (OuterVolumeSpecName: "kube-api-access-pmpm5") pod "349fa1e0-3431-4ef2-8bf3-77c052a7e479" (UID: "349fa1e0-3431-4ef2-8bf3-77c052a7e479"). InnerVolumeSpecName "kube-api-access-pmpm5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:26:04 crc kubenswrapper[5099]: I0121 18:26:04.181503 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pmpm5\" (UniqueName: \"kubernetes.io/projected/349fa1e0-3431-4ef2-8bf3-77c052a7e479-kube-api-access-pmpm5\") on node \"crc\" DevicePath \"\"" Jan 21 18:26:04 crc kubenswrapper[5099]: I0121 18:26:04.744255 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483666-mctxk" event={"ID":"349fa1e0-3431-4ef2-8bf3-77c052a7e479","Type":"ContainerDied","Data":"913a15a5bb7dd321502c6c13d5490fe47238247afee9f8c43347974070defbfa"} Jan 21 18:26:04 crc kubenswrapper[5099]: I0121 18:26:04.744308 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="913a15a5bb7dd321502c6c13d5490fe47238247afee9f8c43347974070defbfa" Jan 21 18:26:04 crc kubenswrapper[5099]: I0121 18:26:04.744323 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483666-mctxk" Jan 21 18:26:11 crc kubenswrapper[5099]: I0121 18:26:11.567282 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tz5cz" Jan 21 18:26:21 crc kubenswrapper[5099]: I0121 18:26:21.326425 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9tm8h"] Jan 21 18:26:21 crc kubenswrapper[5099]: I0121 18:26:21.327697 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="349fa1e0-3431-4ef2-8bf3-77c052a7e479" containerName="oc" Jan 21 18:26:21 crc kubenswrapper[5099]: I0121 18:26:21.327712 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="349fa1e0-3431-4ef2-8bf3-77c052a7e479" containerName="oc" Jan 21 18:26:21 crc kubenswrapper[5099]: I0121 18:26:21.327837 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="349fa1e0-3431-4ef2-8bf3-77c052a7e479" containerName="oc" Jan 21 18:26:21 crc kubenswrapper[5099]: I0121 18:26:21.579076 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9tm8h"] Jan 21 18:26:21 crc kubenswrapper[5099]: I0121 18:26:21.579375 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9tm8h" Jan 21 18:26:21 crc kubenswrapper[5099]: I0121 18:26:21.629665 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e219d57-150b-4f85-ac81-6c1b66794306-utilities\") pod \"community-operators-9tm8h\" (UID: \"1e219d57-150b-4f85-ac81-6c1b66794306\") " pod="openshift-marketplace/community-operators-9tm8h" Jan 21 18:26:21 crc kubenswrapper[5099]: I0121 18:26:21.629775 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-966sf\" (UniqueName: \"kubernetes.io/projected/1e219d57-150b-4f85-ac81-6c1b66794306-kube-api-access-966sf\") pod \"community-operators-9tm8h\" (UID: \"1e219d57-150b-4f85-ac81-6c1b66794306\") " pod="openshift-marketplace/community-operators-9tm8h" Jan 21 18:26:21 crc kubenswrapper[5099]: I0121 18:26:21.629873 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e219d57-150b-4f85-ac81-6c1b66794306-catalog-content\") pod \"community-operators-9tm8h\" (UID: \"1e219d57-150b-4f85-ac81-6c1b66794306\") " pod="openshift-marketplace/community-operators-9tm8h" Jan 21 18:26:21 crc kubenswrapper[5099]: I0121 18:26:21.731191 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e219d57-150b-4f85-ac81-6c1b66794306-utilities\") pod \"community-operators-9tm8h\" (UID: \"1e219d57-150b-4f85-ac81-6c1b66794306\") " pod="openshift-marketplace/community-operators-9tm8h" Jan 21 18:26:21 crc kubenswrapper[5099]: I0121 18:26:21.731254 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-966sf\" (UniqueName: \"kubernetes.io/projected/1e219d57-150b-4f85-ac81-6c1b66794306-kube-api-access-966sf\") pod \"community-operators-9tm8h\" (UID: \"1e219d57-150b-4f85-ac81-6c1b66794306\") " pod="openshift-marketplace/community-operators-9tm8h" Jan 21 18:26:21 crc kubenswrapper[5099]: I0121 18:26:21.731500 5099 reconciler_common.go:224] "operationExecutor.MountVolume started 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e219d57-150b-4f85-ac81-6c1b66794306-catalog-content\") pod \"community-operators-9tm8h\" (UID: \"1e219d57-150b-4f85-ac81-6c1b66794306\") " pod="openshift-marketplace/community-operators-9tm8h" Jan 21 18:26:21 crc kubenswrapper[5099]: I0121 18:26:21.731901 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e219d57-150b-4f85-ac81-6c1b66794306-utilities\") pod \"community-operators-9tm8h\" (UID: \"1e219d57-150b-4f85-ac81-6c1b66794306\") " pod="openshift-marketplace/community-operators-9tm8h" Jan 21 18:26:21 crc kubenswrapper[5099]: I0121 18:26:21.732086 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e219d57-150b-4f85-ac81-6c1b66794306-catalog-content\") pod \"community-operators-9tm8h\" (UID: \"1e219d57-150b-4f85-ac81-6c1b66794306\") " pod="openshift-marketplace/community-operators-9tm8h" Jan 21 18:26:21 crc kubenswrapper[5099]: I0121 18:26:21.768887 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-966sf\" (UniqueName: \"kubernetes.io/projected/1e219d57-150b-4f85-ac81-6c1b66794306-kube-api-access-966sf\") pod \"community-operators-9tm8h\" (UID: \"1e219d57-150b-4f85-ac81-6c1b66794306\") " pod="openshift-marketplace/community-operators-9tm8h" Jan 21 18:26:21 crc kubenswrapper[5099]: I0121 18:26:21.901007 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9tm8h" Jan 21 18:26:22 crc kubenswrapper[5099]: I0121 18:26:22.157209 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9tm8h"] Jan 21 18:26:22 crc kubenswrapper[5099]: I0121 18:26:22.861636 5099 generic.go:358] "Generic (PLEG): container finished" podID="1e219d57-150b-4f85-ac81-6c1b66794306" containerID="a21b58bebedbd28365d9252e2e1814b9407e2701cadd8e99f93ac06eecf06da4" exitCode=0 Jan 21 18:26:22 crc kubenswrapper[5099]: I0121 18:26:22.861754 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9tm8h" event={"ID":"1e219d57-150b-4f85-ac81-6c1b66794306","Type":"ContainerDied","Data":"a21b58bebedbd28365d9252e2e1814b9407e2701cadd8e99f93ac06eecf06da4"} Jan 21 18:26:22 crc kubenswrapper[5099]: I0121 18:26:22.862103 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9tm8h" event={"ID":"1e219d57-150b-4f85-ac81-6c1b66794306","Type":"ContainerStarted","Data":"bd1d68db75d762812ae3e3a2844b421ec9f2e303f0b39a3cc5279a6982cda0e3"} Jan 21 18:26:23 crc kubenswrapper[5099]: I0121 18:26:23.871480 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9tm8h" event={"ID":"1e219d57-150b-4f85-ac81-6c1b66794306","Type":"ContainerStarted","Data":"6f8b05560b18b88a74edf04713b32054882b876dcc579e227241b3bcf10055fb"} Jan 21 18:26:24 crc kubenswrapper[5099]: I0121 18:26:24.881151 5099 generic.go:358] "Generic (PLEG): container finished" podID="1e219d57-150b-4f85-ac81-6c1b66794306" containerID="6f8b05560b18b88a74edf04713b32054882b876dcc579e227241b3bcf10055fb" exitCode=0 Jan 21 18:26:24 crc kubenswrapper[5099]: I0121 18:26:24.881246 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9tm8h" 
event={"ID":"1e219d57-150b-4f85-ac81-6c1b66794306","Type":"ContainerDied","Data":"6f8b05560b18b88a74edf04713b32054882b876dcc579e227241b3bcf10055fb"} Jan 21 18:26:24 crc kubenswrapper[5099]: I0121 18:26:24.881756 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9tm8h" event={"ID":"1e219d57-150b-4f85-ac81-6c1b66794306","Type":"ContainerStarted","Data":"43975d4e9a3879cddd66909d4009a9a2ce6cf974f4fbdde4b662916cce49641d"} Jan 21 18:26:24 crc kubenswrapper[5099]: I0121 18:26:24.908059 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9tm8h" podStartSLOduration=3.314012326 podStartE2EDuration="3.908044799s" podCreationTimestamp="2026-01-21 18:26:21 +0000 UTC" firstStartedPulling="2026-01-21 18:26:22.862804452 +0000 UTC m=+740.276766933" lastFinishedPulling="2026-01-21 18:26:23.456836945 +0000 UTC m=+740.870799406" observedRunningTime="2026-01-21 18:26:24.904245673 +0000 UTC m=+742.318208134" watchObservedRunningTime="2026-01-21 18:26:24.908044799 +0000 UTC m=+742.322007260" Jan 21 18:26:31 crc kubenswrapper[5099]: I0121 18:26:31.901392 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-9tm8h" Jan 21 18:26:31 crc kubenswrapper[5099]: I0121 18:26:31.902110 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9tm8h" Jan 21 18:26:31 crc kubenswrapper[5099]: I0121 18:26:31.946163 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9tm8h" Jan 21 18:26:31 crc kubenswrapper[5099]: I0121 18:26:31.983192 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9tm8h" Jan 21 18:26:32 crc kubenswrapper[5099]: I0121 18:26:32.182480 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9tm8h"] Jan 21 18:26:33 crc kubenswrapper[5099]: I0121 18:26:33.940969 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9tm8h" podUID="1e219d57-150b-4f85-ac81-6c1b66794306" containerName="registry-server" containerID="cri-o://43975d4e9a3879cddd66909d4009a9a2ce6cf974f4fbdde4b662916cce49641d" gracePeriod=2 Jan 21 18:26:34 crc kubenswrapper[5099]: I0121 18:26:34.949634 5099 generic.go:358] "Generic (PLEG): container finished" podID="1e219d57-150b-4f85-ac81-6c1b66794306" containerID="43975d4e9a3879cddd66909d4009a9a2ce6cf974f4fbdde4b662916cce49641d" exitCode=0 Jan 21 18:26:34 crc kubenswrapper[5099]: I0121 18:26:34.949724 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9tm8h" event={"ID":"1e219d57-150b-4f85-ac81-6c1b66794306","Type":"ContainerDied","Data":"43975d4e9a3879cddd66909d4009a9a2ce6cf974f4fbdde4b662916cce49641d"} Jan 21 18:26:35 crc kubenswrapper[5099]: I0121 18:26:35.411802 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9tm8h" Jan 21 18:26:35 crc kubenswrapper[5099]: I0121 18:26:35.546642 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e219d57-150b-4f85-ac81-6c1b66794306-utilities\") pod \"1e219d57-150b-4f85-ac81-6c1b66794306\" (UID: \"1e219d57-150b-4f85-ac81-6c1b66794306\") " Jan 21 18:26:35 crc kubenswrapper[5099]: I0121 18:26:35.546773 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e219d57-150b-4f85-ac81-6c1b66794306-catalog-content\") pod \"1e219d57-150b-4f85-ac81-6c1b66794306\" (UID: \"1e219d57-150b-4f85-ac81-6c1b66794306\") " Jan 21 18:26:35 crc kubenswrapper[5099]: I0121 18:26:35.546816 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-966sf\" (UniqueName: \"kubernetes.io/projected/1e219d57-150b-4f85-ac81-6c1b66794306-kube-api-access-966sf\") pod \"1e219d57-150b-4f85-ac81-6c1b66794306\" (UID: \"1e219d57-150b-4f85-ac81-6c1b66794306\") " Jan 21 18:26:35 crc kubenswrapper[5099]: I0121 18:26:35.547664 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e219d57-150b-4f85-ac81-6c1b66794306-utilities" (OuterVolumeSpecName: "utilities") pod "1e219d57-150b-4f85-ac81-6c1b66794306" (UID: "1e219d57-150b-4f85-ac81-6c1b66794306"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:26:35 crc kubenswrapper[5099]: I0121 18:26:35.557172 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e219d57-150b-4f85-ac81-6c1b66794306-kube-api-access-966sf" (OuterVolumeSpecName: "kube-api-access-966sf") pod "1e219d57-150b-4f85-ac81-6c1b66794306" (UID: "1e219d57-150b-4f85-ac81-6c1b66794306"). InnerVolumeSpecName "kube-api-access-966sf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:26:35 crc kubenswrapper[5099]: I0121 18:26:35.594549 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e219d57-150b-4f85-ac81-6c1b66794306-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1e219d57-150b-4f85-ac81-6c1b66794306" (UID: "1e219d57-150b-4f85-ac81-6c1b66794306"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:26:35 crc kubenswrapper[5099]: I0121 18:26:35.648942 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e219d57-150b-4f85-ac81-6c1b66794306-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 18:26:35 crc kubenswrapper[5099]: I0121 18:26:35.649494 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-966sf\" (UniqueName: \"kubernetes.io/projected/1e219d57-150b-4f85-ac81-6c1b66794306-kube-api-access-966sf\") on node \"crc\" DevicePath \"\"" Jan 21 18:26:35 crc kubenswrapper[5099]: I0121 18:26:35.649509 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e219d57-150b-4f85-ac81-6c1b66794306-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 18:26:35 crc kubenswrapper[5099]: I0121 18:26:35.959632 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9tm8h" event={"ID":"1e219d57-150b-4f85-ac81-6c1b66794306","Type":"ContainerDied","Data":"bd1d68db75d762812ae3e3a2844b421ec9f2e303f0b39a3cc5279a6982cda0e3"} Jan 21 18:26:35 crc kubenswrapper[5099]: I0121 18:26:35.960102 5099 scope.go:117] "RemoveContainer" containerID="43975d4e9a3879cddd66909d4009a9a2ce6cf974f4fbdde4b662916cce49641d" Jan 21 18:26:35 crc kubenswrapper[5099]: I0121 18:26:35.960361 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9tm8h" Jan 21 18:26:35 crc kubenswrapper[5099]: I0121 18:26:35.983031 5099 scope.go:117] "RemoveContainer" containerID="6f8b05560b18b88a74edf04713b32054882b876dcc579e227241b3bcf10055fb" Jan 21 18:26:35 crc kubenswrapper[5099]: I0121 18:26:35.985993 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9tm8h"] Jan 21 18:26:35 crc kubenswrapper[5099]: I0121 18:26:35.992305 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9tm8h"] Jan 21 18:26:36 crc kubenswrapper[5099]: I0121 18:26:36.007900 5099 scope.go:117] "RemoveContainer" containerID="a21b58bebedbd28365d9252e2e1814b9407e2701cadd8e99f93ac06eecf06da4" Jan 21 18:26:37 crc kubenswrapper[5099]: I0121 18:26:37.922325 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e219d57-150b-4f85-ac81-6c1b66794306" path="/var/lib/kubelet/pods/1e219d57-150b-4f85-ac81-6c1b66794306/volumes" Jan 21 18:26:46 crc kubenswrapper[5099]: I0121 18:26:46.583388 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pcdgg"] Jan 21 18:26:46 crc kubenswrapper[5099]: I0121 18:26:46.584028 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pcdgg" podUID="d4001d3a-1cc5-473a-a83f-7ae904042d7d" containerName="registry-server" containerID="cri-o://58924e9a3154561dd0ee65cad913b41fe9fb901612eab0028865529c88012b0a" gracePeriod=30 Jan 21 18:26:46 crc kubenswrapper[5099]: I0121 18:26:46.957210 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pcdgg" Jan 21 18:26:47 crc kubenswrapper[5099]: I0121 18:26:47.036899 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4001d3a-1cc5-473a-a83f-7ae904042d7d-catalog-content\") pod \"d4001d3a-1cc5-473a-a83f-7ae904042d7d\" (UID: \"d4001d3a-1cc5-473a-a83f-7ae904042d7d\") " Jan 21 18:26:47 crc kubenswrapper[5099]: I0121 18:26:47.037058 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4001d3a-1cc5-473a-a83f-7ae904042d7d-utilities\") pod \"d4001d3a-1cc5-473a-a83f-7ae904042d7d\" (UID: \"d4001d3a-1cc5-473a-a83f-7ae904042d7d\") " Jan 21 18:26:47 crc kubenswrapper[5099]: I0121 18:26:47.037216 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5hp9\" (UniqueName: \"kubernetes.io/projected/d4001d3a-1cc5-473a-a83f-7ae904042d7d-kube-api-access-q5hp9\") pod \"d4001d3a-1cc5-473a-a83f-7ae904042d7d\" (UID: \"d4001d3a-1cc5-473a-a83f-7ae904042d7d\") " Jan 21 18:26:47 crc kubenswrapper[5099]: I0121 18:26:47.040357 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4001d3a-1cc5-473a-a83f-7ae904042d7d-utilities" (OuterVolumeSpecName: "utilities") pod "d4001d3a-1cc5-473a-a83f-7ae904042d7d" (UID: "d4001d3a-1cc5-473a-a83f-7ae904042d7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:26:47 crc kubenswrapper[5099]: I0121 18:26:47.043827 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4001d3a-1cc5-473a-a83f-7ae904042d7d-kube-api-access-q5hp9" (OuterVolumeSpecName: "kube-api-access-q5hp9") pod "d4001d3a-1cc5-473a-a83f-7ae904042d7d" (UID: "d4001d3a-1cc5-473a-a83f-7ae904042d7d"). InnerVolumeSpecName "kube-api-access-q5hp9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:26:47 crc kubenswrapper[5099]: I0121 18:26:47.046859 5099 generic.go:358] "Generic (PLEG): container finished" podID="d4001d3a-1cc5-473a-a83f-7ae904042d7d" containerID="58924e9a3154561dd0ee65cad913b41fe9fb901612eab0028865529c88012b0a" exitCode=0 Jan 21 18:26:47 crc kubenswrapper[5099]: I0121 18:26:47.046954 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pcdgg" event={"ID":"d4001d3a-1cc5-473a-a83f-7ae904042d7d","Type":"ContainerDied","Data":"58924e9a3154561dd0ee65cad913b41fe9fb901612eab0028865529c88012b0a"} Jan 21 18:26:47 crc kubenswrapper[5099]: I0121 18:26:47.047008 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pcdgg" event={"ID":"d4001d3a-1cc5-473a-a83f-7ae904042d7d","Type":"ContainerDied","Data":"2e2e61a4ed6c15cac010771f879aaaa7e71631fa22042d06eeed7e6e70fd344e"} Jan 21 18:26:47 crc kubenswrapper[5099]: I0121 18:26:47.047002 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pcdgg" Jan 21 18:26:47 crc kubenswrapper[5099]: I0121 18:26:47.047029 5099 scope.go:117] "RemoveContainer" containerID="58924e9a3154561dd0ee65cad913b41fe9fb901612eab0028865529c88012b0a" Jan 21 18:26:47 crc kubenswrapper[5099]: I0121 18:26:47.057422 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4001d3a-1cc5-473a-a83f-7ae904042d7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d4001d3a-1cc5-473a-a83f-7ae904042d7d" (UID: "d4001d3a-1cc5-473a-a83f-7ae904042d7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:26:47 crc kubenswrapper[5099]: I0121 18:26:47.069441 5099 scope.go:117] "RemoveContainer" containerID="d001b5ca4eab330dc6c71994a34d05a31174c4d33477e07e67757e12fd5095b0" Jan 21 18:26:47 crc kubenswrapper[5099]: I0121 18:26:47.085707 5099 scope.go:117] "RemoveContainer" containerID="872064f2089b4ea6f49c22a695429e8fb42f27bb3fdcdd0fef49988c2ce8a3ff" Jan 21 18:26:47 crc kubenswrapper[5099]: I0121 18:26:47.102334 5099 scope.go:117] "RemoveContainer" containerID="58924e9a3154561dd0ee65cad913b41fe9fb901612eab0028865529c88012b0a" Jan 21 18:26:47 crc kubenswrapper[5099]: E0121 18:26:47.102960 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58924e9a3154561dd0ee65cad913b41fe9fb901612eab0028865529c88012b0a\": container with ID starting with 58924e9a3154561dd0ee65cad913b41fe9fb901612eab0028865529c88012b0a not found: ID does not exist" containerID="58924e9a3154561dd0ee65cad913b41fe9fb901612eab0028865529c88012b0a" Jan 21 18:26:47 crc kubenswrapper[5099]: I0121 18:26:47.103068 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58924e9a3154561dd0ee65cad913b41fe9fb901612eab0028865529c88012b0a"} err="failed to get container status \"58924e9a3154561dd0ee65cad913b41fe9fb901612eab0028865529c88012b0a\": rpc error: code = NotFound desc = could not find container \"58924e9a3154561dd0ee65cad913b41fe9fb901612eab0028865529c88012b0a\": container with ID starting with 58924e9a3154561dd0ee65cad913b41fe9fb901612eab0028865529c88012b0a not found: ID does not exist" Jan 21 18:26:47 crc kubenswrapper[5099]: I0121 18:26:47.103125 5099 scope.go:117] "RemoveContainer" containerID="d001b5ca4eab330dc6c71994a34d05a31174c4d33477e07e67757e12fd5095b0" Jan 21 18:26:47 crc kubenswrapper[5099]: E0121 18:26:47.103629 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d001b5ca4eab330dc6c71994a34d05a31174c4d33477e07e67757e12fd5095b0\": container with ID starting with d001b5ca4eab330dc6c71994a34d05a31174c4d33477e07e67757e12fd5095b0 not found: ID does not exist" containerID="d001b5ca4eab330dc6c71994a34d05a31174c4d33477e07e67757e12fd5095b0" Jan 21 18:26:47 crc kubenswrapper[5099]: I0121 18:26:47.103696 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d001b5ca4eab330dc6c71994a34d05a31174c4d33477e07e67757e12fd5095b0"} err="failed to get container status \"d001b5ca4eab330dc6c71994a34d05a31174c4d33477e07e67757e12fd5095b0\": rpc error: code = NotFound desc = could not find container \"d001b5ca4eab330dc6c71994a34d05a31174c4d33477e07e67757e12fd5095b0\": container with ID starting with d001b5ca4eab330dc6c71994a34d05a31174c4d33477e07e67757e12fd5095b0 not found: ID does not exist" Jan 21 18:26:47 crc 
kubenswrapper[5099]: I0121 18:26:47.103749 5099 scope.go:117] "RemoveContainer" containerID="872064f2089b4ea6f49c22a695429e8fb42f27bb3fdcdd0fef49988c2ce8a3ff" Jan 21 18:26:47 crc kubenswrapper[5099]: E0121 18:26:47.104279 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"872064f2089b4ea6f49c22a695429e8fb42f27bb3fdcdd0fef49988c2ce8a3ff\": container with ID starting with 872064f2089b4ea6f49c22a695429e8fb42f27bb3fdcdd0fef49988c2ce8a3ff not found: ID does not exist" containerID="872064f2089b4ea6f49c22a695429e8fb42f27bb3fdcdd0fef49988c2ce8a3ff" Jan 21 18:26:47 crc kubenswrapper[5099]: I0121 18:26:47.104339 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"872064f2089b4ea6f49c22a695429e8fb42f27bb3fdcdd0fef49988c2ce8a3ff"} err="failed to get container status \"872064f2089b4ea6f49c22a695429e8fb42f27bb3fdcdd0fef49988c2ce8a3ff\": rpc error: code = NotFound desc = could not find container \"872064f2089b4ea6f49c22a695429e8fb42f27bb3fdcdd0fef49988c2ce8a3ff\": container with ID starting with 872064f2089b4ea6f49c22a695429e8fb42f27bb3fdcdd0fef49988c2ce8a3ff not found: ID does not exist" Jan 21 18:26:47 crc kubenswrapper[5099]: I0121 18:26:47.138594 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4001d3a-1cc5-473a-a83f-7ae904042d7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 18:26:47 crc kubenswrapper[5099]: I0121 18:26:47.138933 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q5hp9\" (UniqueName: \"kubernetes.io/projected/d4001d3a-1cc5-473a-a83f-7ae904042d7d-kube-api-access-q5hp9\") on node \"crc\" DevicePath \"\"" Jan 21 18:26:47 crc kubenswrapper[5099]: I0121 18:26:47.138943 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4001d3a-1cc5-473a-a83f-7ae904042d7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 18:26:47 crc kubenswrapper[5099]: I0121 18:26:47.386236 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pcdgg"] Jan 21 18:26:47 crc kubenswrapper[5099]: I0121 18:26:47.389915 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pcdgg"] Jan 21 18:26:47 crc kubenswrapper[5099]: I0121 18:26:47.922910 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4001d3a-1cc5-473a-a83f-7ae904042d7d" path="/var/lib/kubelet/pods/d4001d3a-1cc5-473a-a83f-7ae904042d7d/volumes" Jan 21 18:26:50 crc kubenswrapper[5099]: I0121 18:26:50.941558 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zqkl7"] Jan 21 18:26:50 crc kubenswrapper[5099]: I0121 18:26:50.942873 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d4001d3a-1cc5-473a-a83f-7ae904042d7d" containerName="registry-server" Jan 21 18:26:50 crc kubenswrapper[5099]: I0121 18:26:50.942892 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4001d3a-1cc5-473a-a83f-7ae904042d7d" containerName="registry-server" Jan 21 18:26:50 crc kubenswrapper[5099]: I0121 18:26:50.942904 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1e219d57-150b-4f85-ac81-6c1b66794306" containerName="extract-utilities" Jan 21 18:26:50 crc kubenswrapper[5099]: I0121 18:26:50.942910 5099 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="1e219d57-150b-4f85-ac81-6c1b66794306" containerName="extract-utilities" Jan 21 18:26:50 crc kubenswrapper[5099]: I0121 18:26:50.942923 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d4001d3a-1cc5-473a-a83f-7ae904042d7d" containerName="extract-utilities" Jan 21 18:26:50 crc kubenswrapper[5099]: I0121 18:26:50.942930 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4001d3a-1cc5-473a-a83f-7ae904042d7d" containerName="extract-utilities" Jan 21 18:26:50 crc kubenswrapper[5099]: I0121 18:26:50.942941 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d4001d3a-1cc5-473a-a83f-7ae904042d7d" containerName="extract-content" Jan 21 18:26:50 crc kubenswrapper[5099]: I0121 18:26:50.942948 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4001d3a-1cc5-473a-a83f-7ae904042d7d" containerName="extract-content" Jan 21 18:26:50 crc kubenswrapper[5099]: I0121 18:26:50.942962 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1e219d57-150b-4f85-ac81-6c1b66794306" containerName="extract-content" Jan 21 18:26:50 crc kubenswrapper[5099]: I0121 18:26:50.942970 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e219d57-150b-4f85-ac81-6c1b66794306" containerName="extract-content" Jan 21 18:26:50 crc kubenswrapper[5099]: I0121 18:26:50.943000 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1e219d57-150b-4f85-ac81-6c1b66794306" containerName="registry-server" Jan 21 18:26:50 crc kubenswrapper[5099]: I0121 18:26:50.943009 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e219d57-150b-4f85-ac81-6c1b66794306" containerName="registry-server" Jan 21 18:26:50 crc kubenswrapper[5099]: I0121 18:26:50.943134 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="1e219d57-150b-4f85-ac81-6c1b66794306" containerName="registry-server" Jan 21 18:26:50 crc kubenswrapper[5099]: I0121 18:26:50.943155 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="d4001d3a-1cc5-473a-a83f-7ae904042d7d" containerName="registry-server" Jan 21 18:26:50 crc kubenswrapper[5099]: I0121 18:26:50.964145 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zqkl7"] Jan 21 18:26:50 crc kubenswrapper[5099]: I0121 18:26:50.964401 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zqkl7" Jan 21 18:26:50 crc kubenswrapper[5099]: I0121 18:26:50.971512 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Jan 21 18:26:51 crc kubenswrapper[5099]: I0121 18:26:51.114669 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zqkl7\" (UID: \"d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zqkl7" Jan 21 18:26:51 crc kubenswrapper[5099]: I0121 18:26:51.114782 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj2mf\" (UniqueName: \"kubernetes.io/projected/d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44-kube-api-access-qj2mf\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zqkl7\" (UID: \"d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zqkl7" Jan 21 18:26:51 crc kubenswrapper[5099]: I0121 18:26:51.114888 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zqkl7\" (UID: \"d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zqkl7" Jan 21 18:26:51 crc kubenswrapper[5099]: I0121 18:26:51.216656 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zqkl7\" (UID: \"d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zqkl7" Jan 21 18:26:51 crc kubenswrapper[5099]: I0121 18:26:51.215995 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zqkl7\" (UID: \"d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zqkl7" Jan 21 18:26:51 crc kubenswrapper[5099]: I0121 18:26:51.216776 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qj2mf\" (UniqueName: \"kubernetes.io/projected/d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44-kube-api-access-qj2mf\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zqkl7\" (UID: \"d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zqkl7" Jan 21 18:26:51 crc kubenswrapper[5099]: I0121 18:26:51.216876 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zqkl7\" (UID: \"d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44\") " 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zqkl7" Jan 21 18:26:51 crc kubenswrapper[5099]: I0121 18:26:51.217137 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zqkl7\" (UID: \"d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zqkl7" Jan 21 18:26:51 crc kubenswrapper[5099]: I0121 18:26:51.239530 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qj2mf\" (UniqueName: \"kubernetes.io/projected/d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44-kube-api-access-qj2mf\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zqkl7\" (UID: \"d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zqkl7" Jan 21 18:26:51 crc kubenswrapper[5099]: I0121 18:26:51.289093 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zqkl7" Jan 21 18:26:51 crc kubenswrapper[5099]: I0121 18:26:51.517200 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zqkl7"] Jan 21 18:26:52 crc kubenswrapper[5099]: I0121 18:26:52.064568 5099 patch_prober.go:28] interesting pod/machine-config-daemon-hsl47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 18:26:52 crc kubenswrapper[5099]: I0121 18:26:52.064712 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 18:26:52 crc kubenswrapper[5099]: I0121 18:26:52.082853 5099 generic.go:358] "Generic (PLEG): container finished" podID="d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44" containerID="df0b0fdb680f36937112adf0fa1c8bc9107ebc828aea6f6afe71343c58b112f7" exitCode=0 Jan 21 18:26:52 crc kubenswrapper[5099]: I0121 18:26:52.082930 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zqkl7" event={"ID":"d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44","Type":"ContainerDied","Data":"df0b0fdb680f36937112adf0fa1c8bc9107ebc828aea6f6afe71343c58b112f7"} Jan 21 18:26:52 crc kubenswrapper[5099]: I0121 18:26:52.082999 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zqkl7" event={"ID":"d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44","Type":"ContainerStarted","Data":"9cee9d668e2a485b503eab4a0a62722e3a6f183bcd4250d9650540001b93eefd"} Jan 21 18:26:53 crc kubenswrapper[5099]: I0121 18:26:53.898402 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5c9sl"] Jan 21 18:26:53 crc kubenswrapper[5099]: I0121 18:26:53.907340 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5c9sl" Jan 21 18:26:53 crc kubenswrapper[5099]: I0121 18:26:53.923003 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5c9sl"] Jan 21 18:26:53 crc kubenswrapper[5099]: I0121 18:26:53.970938 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb80b964-3167-4125-8e33-214730cc9bdf-catalog-content\") pod \"redhat-operators-5c9sl\" (UID: \"eb80b964-3167-4125-8e33-214730cc9bdf\") " pod="openshift-marketplace/redhat-operators-5c9sl" Jan 21 18:26:53 crc kubenswrapper[5099]: I0121 18:26:53.970996 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwf5z\" (UniqueName: \"kubernetes.io/projected/eb80b964-3167-4125-8e33-214730cc9bdf-kube-api-access-wwf5z\") pod \"redhat-operators-5c9sl\" (UID: \"eb80b964-3167-4125-8e33-214730cc9bdf\") " pod="openshift-marketplace/redhat-operators-5c9sl" Jan 21 18:26:53 crc kubenswrapper[5099]: I0121 18:26:53.971085 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb80b964-3167-4125-8e33-214730cc9bdf-utilities\") pod \"redhat-operators-5c9sl\" (UID: \"eb80b964-3167-4125-8e33-214730cc9bdf\") " pod="openshift-marketplace/redhat-operators-5c9sl" Jan 21 18:26:54 crc kubenswrapper[5099]: I0121 18:26:54.072582 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb80b964-3167-4125-8e33-214730cc9bdf-catalog-content\") pod \"redhat-operators-5c9sl\" (UID: \"eb80b964-3167-4125-8e33-214730cc9bdf\") " pod="openshift-marketplace/redhat-operators-5c9sl" Jan 21 18:26:54 crc kubenswrapper[5099]: I0121 18:26:54.072657 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wwf5z\" (UniqueName: \"kubernetes.io/projected/eb80b964-3167-4125-8e33-214730cc9bdf-kube-api-access-wwf5z\") pod \"redhat-operators-5c9sl\" (UID: \"eb80b964-3167-4125-8e33-214730cc9bdf\") " pod="openshift-marketplace/redhat-operators-5c9sl" Jan 21 18:26:54 crc kubenswrapper[5099]: I0121 18:26:54.072774 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb80b964-3167-4125-8e33-214730cc9bdf-utilities\") pod \"redhat-operators-5c9sl\" (UID: \"eb80b964-3167-4125-8e33-214730cc9bdf\") " pod="openshift-marketplace/redhat-operators-5c9sl" Jan 21 18:26:54 crc kubenswrapper[5099]: I0121 18:26:54.073321 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb80b964-3167-4125-8e33-214730cc9bdf-catalog-content\") pod \"redhat-operators-5c9sl\" (UID: \"eb80b964-3167-4125-8e33-214730cc9bdf\") " pod="openshift-marketplace/redhat-operators-5c9sl" Jan 21 18:26:54 crc kubenswrapper[5099]: I0121 18:26:54.073396 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb80b964-3167-4125-8e33-214730cc9bdf-utilities\") pod \"redhat-operators-5c9sl\" (UID: \"eb80b964-3167-4125-8e33-214730cc9bdf\") " pod="openshift-marketplace/redhat-operators-5c9sl" Jan 21 18:26:54 crc kubenswrapper[5099]: I0121 18:26:54.105133 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-wwf5z\" (UniqueName: \"kubernetes.io/projected/eb80b964-3167-4125-8e33-214730cc9bdf-kube-api-access-wwf5z\") pod \"redhat-operators-5c9sl\" (UID: \"eb80b964-3167-4125-8e33-214730cc9bdf\") " pod="openshift-marketplace/redhat-operators-5c9sl" Jan 21 18:26:54 crc kubenswrapper[5099]: I0121 18:26:54.109347 5099 generic.go:358] "Generic (PLEG): container finished" podID="d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44" containerID="ae8b3703fff1d1151f74d0f5f7d4a0002d8c1f0c5a0ab3d2f995a8c80a6925f6" exitCode=0 Jan 21 18:26:54 crc kubenswrapper[5099]: I0121 18:26:54.109711 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zqkl7" event={"ID":"d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44","Type":"ContainerDied","Data":"ae8b3703fff1d1151f74d0f5f7d4a0002d8c1f0c5a0ab3d2f995a8c80a6925f6"} Jan 21 18:26:54 crc kubenswrapper[5099]: I0121 18:26:54.276549 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5c9sl" Jan 21 18:26:54 crc kubenswrapper[5099]: I0121 18:26:54.528704 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5c9sl"] Jan 21 18:26:54 crc kubenswrapper[5099]: W0121 18:26:54.542722 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb80b964_3167_4125_8e33_214730cc9bdf.slice/crio-3b4605971d126f394c9cbea8d797758dd887fef954910605139b9d8a01d9be25 WatchSource:0}: Error finding container 3b4605971d126f394c9cbea8d797758dd887fef954910605139b9d8a01d9be25: Status 404 returned error can't find the container with id 3b4605971d126f394c9cbea8d797758dd887fef954910605139b9d8a01d9be25 Jan 21 18:26:55 crc kubenswrapper[5099]: I0121 18:26:55.119804 5099 generic.go:358] "Generic (PLEG): container finished" podID="d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44" containerID="080a18d05873d9d6675e5cc1eb4a5a02f9d392606f874004266fe8ef9dbf9edf" exitCode=0 Jan 21 18:26:55 crc kubenswrapper[5099]: I0121 18:26:55.119935 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zqkl7" event={"ID":"d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44","Type":"ContainerDied","Data":"080a18d05873d9d6675e5cc1eb4a5a02f9d392606f874004266fe8ef9dbf9edf"} Jan 21 18:26:55 crc kubenswrapper[5099]: I0121 18:26:55.122345 5099 generic.go:358] "Generic (PLEG): container finished" podID="eb80b964-3167-4125-8e33-214730cc9bdf" containerID="47c99bfc01fbc29372214093249bd1b2b0b6a36d6093663637ce7d365d7f5a26" exitCode=0 Jan 21 18:26:55 crc kubenswrapper[5099]: I0121 18:26:55.122451 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5c9sl" event={"ID":"eb80b964-3167-4125-8e33-214730cc9bdf","Type":"ContainerDied","Data":"47c99bfc01fbc29372214093249bd1b2b0b6a36d6093663637ce7d365d7f5a26"} Jan 21 18:26:55 crc kubenswrapper[5099]: I0121 18:26:55.122502 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5c9sl" event={"ID":"eb80b964-3167-4125-8e33-214730cc9bdf","Type":"ContainerStarted","Data":"3b4605971d126f394c9cbea8d797758dd887fef954910605139b9d8a01d9be25"} Jan 21 18:26:56 crc kubenswrapper[5099]: I0121 18:26:56.132425 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5c9sl" 
event={"ID":"eb80b964-3167-4125-8e33-214730cc9bdf","Type":"ContainerStarted","Data":"a10a7ccfa31662d47a8afd228f3278527632b506c2d7be09f3aceb6f0b344a2e"} Jan 21 18:26:56 crc kubenswrapper[5099]: I0121 18:26:56.421232 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zqkl7" Jan 21 18:26:56 crc kubenswrapper[5099]: I0121 18:26:56.511456 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44-bundle\") pod \"d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44\" (UID: \"d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44\") " Jan 21 18:26:56 crc kubenswrapper[5099]: I0121 18:26:56.511797 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qj2mf\" (UniqueName: \"kubernetes.io/projected/d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44-kube-api-access-qj2mf\") pod \"d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44\" (UID: \"d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44\") " Jan 21 18:26:56 crc kubenswrapper[5099]: I0121 18:26:56.511851 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44-util\") pod \"d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44\" (UID: \"d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44\") " Jan 21 18:26:56 crc kubenswrapper[5099]: I0121 18:26:56.514048 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44-bundle" (OuterVolumeSpecName: "bundle") pod "d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44" (UID: "d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:26:56 crc kubenswrapper[5099]: I0121 18:26:56.524354 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44-util" (OuterVolumeSpecName: "util") pod "d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44" (UID: "d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:26:56 crc kubenswrapper[5099]: I0121 18:26:56.613454 5099 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44-util\") on node \"crc\" DevicePath \"\"" Jan 21 18:26:56 crc kubenswrapper[5099]: I0121 18:26:56.613508 5099 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 18:26:56 crc kubenswrapper[5099]: I0121 18:26:56.805944 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44-kube-api-access-qj2mf" (OuterVolumeSpecName: "kube-api-access-qj2mf") pod "d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44" (UID: "d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44"). InnerVolumeSpecName "kube-api-access-qj2mf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:26:56 crc kubenswrapper[5099]: I0121 18:26:56.816650 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qj2mf\" (UniqueName: \"kubernetes.io/projected/d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44-kube-api-access-qj2mf\") on node \"crc\" DevicePath \"\"" Jan 21 18:26:57 crc kubenswrapper[5099]: I0121 18:26:57.250632 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zqkl7" event={"ID":"d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44","Type":"ContainerDied","Data":"9cee9d668e2a485b503eab4a0a62722e3a6f183bcd4250d9650540001b93eefd"} Jan 21 18:26:57 crc kubenswrapper[5099]: I0121 18:26:57.250707 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cee9d668e2a485b503eab4a0a62722e3a6f183bcd4250d9650540001b93eefd" Jan 21 18:26:57 crc kubenswrapper[5099]: I0121 18:26:57.250858 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zqkl7" Jan 21 18:26:59 crc kubenswrapper[5099]: I0121 18:26:59.268524 5099 generic.go:358] "Generic (PLEG): container finished" podID="eb80b964-3167-4125-8e33-214730cc9bdf" containerID="a10a7ccfa31662d47a8afd228f3278527632b506c2d7be09f3aceb6f0b344a2e" exitCode=0 Jan 21 18:26:59 crc kubenswrapper[5099]: I0121 18:26:59.268577 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5c9sl" event={"ID":"eb80b964-3167-4125-8e33-214730cc9bdf","Type":"ContainerDied","Data":"a10a7ccfa31662d47a8afd228f3278527632b506c2d7be09f3aceb6f0b344a2e"} Jan 21 18:27:01 crc kubenswrapper[5099]: I0121 18:27:01.149872 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f48c9f"] Jan 21 18:27:01 crc kubenswrapper[5099]: I0121 18:27:01.151076 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44" containerName="extract" Jan 21 18:27:01 crc kubenswrapper[5099]: I0121 18:27:01.151094 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44" containerName="extract" Jan 21 18:27:01 crc kubenswrapper[5099]: I0121 18:27:01.151109 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44" containerName="pull" Jan 21 18:27:01 crc kubenswrapper[5099]: I0121 18:27:01.151114 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44" containerName="pull" Jan 21 18:27:01 crc kubenswrapper[5099]: I0121 18:27:01.151143 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44" containerName="util" Jan 21 18:27:01 crc kubenswrapper[5099]: I0121 18:27:01.151149 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44" containerName="util" Jan 21 18:27:01 crc kubenswrapper[5099]: I0121 18:27:01.151254 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44" containerName="extract" Jan 21 18:27:01 crc kubenswrapper[5099]: I0121 18:27:01.158342 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f48c9f" Jan 21 18:27:01 crc kubenswrapper[5099]: I0121 18:27:01.164322 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Jan 21 18:27:01 crc kubenswrapper[5099]: I0121 18:27:01.213163 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f48c9f"] Jan 21 18:27:01 crc kubenswrapper[5099]: I0121 18:27:01.287864 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5c9sl" event={"ID":"eb80b964-3167-4125-8e33-214730cc9bdf","Type":"ContainerStarted","Data":"64323e1dd8904b7df0f29cd25f50851387535180c8fe7f725369ab202e6a2c31"} Jan 21 18:27:01 crc kubenswrapper[5099]: I0121 18:27:01.313581 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0ca4259c-807e-4b9c-bff3-026450dc0a42-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f48c9f\" (UID: \"0ca4259c-807e-4b9c-bff3-026450dc0a42\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f48c9f" Jan 21 18:27:01 crc kubenswrapper[5099]: I0121 18:27:01.313679 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0ca4259c-807e-4b9c-bff3-026450dc0a42-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f48c9f\" (UID: \"0ca4259c-807e-4b9c-bff3-026450dc0a42\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f48c9f" Jan 21 18:27:01 crc kubenswrapper[5099]: I0121 18:27:01.313797 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf6dl\" (UniqueName: \"kubernetes.io/projected/0ca4259c-807e-4b9c-bff3-026450dc0a42-kube-api-access-rf6dl\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f48c9f\" (UID: \"0ca4259c-807e-4b9c-bff3-026450dc0a42\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f48c9f" Jan 21 18:27:01 crc kubenswrapper[5099]: I0121 18:27:01.321301 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5c9sl" podStartSLOduration=7.55950087 podStartE2EDuration="8.321269055s" podCreationTimestamp="2026-01-21 18:26:53 +0000 UTC" firstStartedPulling="2026-01-21 18:26:55.123489963 +0000 UTC m=+772.537452424" lastFinishedPulling="2026-01-21 18:26:55.885258158 +0000 UTC m=+773.299220609" observedRunningTime="2026-01-21 18:27:01.315075562 +0000 UTC m=+778.729038223" watchObservedRunningTime="2026-01-21 18:27:01.321269055 +0000 UTC m=+778.735231516" Jan 21 18:27:01 crc kubenswrapper[5099]: I0121 18:27:01.414945 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0ca4259c-807e-4b9c-bff3-026450dc0a42-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f48c9f\" (UID: \"0ca4259c-807e-4b9c-bff3-026450dc0a42\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f48c9f" Jan 21 18:27:01 crc kubenswrapper[5099]: I0121 18:27:01.415156 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/0ca4259c-807e-4b9c-bff3-026450dc0a42-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f48c9f\" (UID: \"0ca4259c-807e-4b9c-bff3-026450dc0a42\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f48c9f" Jan 21 18:27:01 crc kubenswrapper[5099]: I0121 18:27:01.415246 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rf6dl\" (UniqueName: \"kubernetes.io/projected/0ca4259c-807e-4b9c-bff3-026450dc0a42-kube-api-access-rf6dl\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f48c9f\" (UID: \"0ca4259c-807e-4b9c-bff3-026450dc0a42\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f48c9f" Jan 21 18:27:01 crc kubenswrapper[5099]: I0121 18:27:01.416217 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0ca4259c-807e-4b9c-bff3-026450dc0a42-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f48c9f\" (UID: \"0ca4259c-807e-4b9c-bff3-026450dc0a42\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f48c9f" Jan 21 18:27:01 crc kubenswrapper[5099]: I0121 18:27:01.417474 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0ca4259c-807e-4b9c-bff3-026450dc0a42-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f48c9f\" (UID: \"0ca4259c-807e-4b9c-bff3-026450dc0a42\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f48c9f" Jan 21 18:27:01 crc kubenswrapper[5099]: I0121 18:27:01.449252 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf6dl\" (UniqueName: \"kubernetes.io/projected/0ca4259c-807e-4b9c-bff3-026450dc0a42-kube-api-access-rf6dl\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f48c9f\" (UID: \"0ca4259c-807e-4b9c-bff3-026450dc0a42\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f48c9f" Jan 21 18:27:01 crc kubenswrapper[5099]: I0121 18:27:01.530102 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f48c9f" Jan 21 18:27:01 crc kubenswrapper[5099]: I0121 18:27:01.857666 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f48c9f"] Jan 21 18:27:01 crc kubenswrapper[5099]: I0121 18:27:01.944695 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5elm9r7"] Jan 21 18:27:02 crc kubenswrapper[5099]: I0121 18:27:02.578431 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5elm9r7"] Jan 21 18:27:02 crc kubenswrapper[5099]: I0121 18:27:02.578961 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f48c9f" event={"ID":"0ca4259c-807e-4b9c-bff3-026450dc0a42","Type":"ContainerStarted","Data":"c3f332f3563768dd3da5dfb00e963f4268908b6b332d8c1f54e4aacfb2aa2efe"} Jan 21 18:27:02 crc kubenswrapper[5099]: I0121 18:27:02.578981 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5elm9r7" Jan 21 18:27:02 crc kubenswrapper[5099]: I0121 18:27:02.738163 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cf4ea907-1f59-413d-bd0e-95da9a482151-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5elm9r7\" (UID: \"cf4ea907-1f59-413d-bd0e-95da9a482151\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5elm9r7" Jan 21 18:27:02 crc kubenswrapper[5099]: I0121 18:27:02.738540 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx654\" (UniqueName: \"kubernetes.io/projected/cf4ea907-1f59-413d-bd0e-95da9a482151-kube-api-access-sx654\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5elm9r7\" (UID: \"cf4ea907-1f59-413d-bd0e-95da9a482151\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5elm9r7" Jan 21 18:27:02 crc kubenswrapper[5099]: I0121 18:27:02.738882 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cf4ea907-1f59-413d-bd0e-95da9a482151-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5elm9r7\" (UID: \"cf4ea907-1f59-413d-bd0e-95da9a482151\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5elm9r7" Jan 21 18:27:02 crc kubenswrapper[5099]: I0121 18:27:02.841177 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cf4ea907-1f59-413d-bd0e-95da9a482151-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5elm9r7\" (UID: \"cf4ea907-1f59-413d-bd0e-95da9a482151\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5elm9r7" Jan 21 18:27:02 crc kubenswrapper[5099]: I0121 18:27:02.841269 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cf4ea907-1f59-413d-bd0e-95da9a482151-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5elm9r7\" (UID: \"cf4ea907-1f59-413d-bd0e-95da9a482151\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5elm9r7" Jan 21 18:27:02 crc kubenswrapper[5099]: I0121 18:27:02.841332 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sx654\" (UniqueName: \"kubernetes.io/projected/cf4ea907-1f59-413d-bd0e-95da9a482151-kube-api-access-sx654\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5elm9r7\" (UID: \"cf4ea907-1f59-413d-bd0e-95da9a482151\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5elm9r7" Jan 21 18:27:02 crc kubenswrapper[5099]: I0121 18:27:02.842235 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cf4ea907-1f59-413d-bd0e-95da9a482151-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5elm9r7\" (UID: \"cf4ea907-1f59-413d-bd0e-95da9a482151\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5elm9r7" Jan 21 18:27:02 crc kubenswrapper[5099]: I0121 18:27:02.842309 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" 
(UniqueName: \"kubernetes.io/empty-dir/cf4ea907-1f59-413d-bd0e-95da9a482151-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5elm9r7\" (UID: \"cf4ea907-1f59-413d-bd0e-95da9a482151\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5elm9r7" Jan 21 18:27:02 crc kubenswrapper[5099]: I0121 18:27:02.865143 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx654\" (UniqueName: \"kubernetes.io/projected/cf4ea907-1f59-413d-bd0e-95da9a482151-kube-api-access-sx654\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5elm9r7\" (UID: \"cf4ea907-1f59-413d-bd0e-95da9a482151\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5elm9r7" Jan 21 18:27:02 crc kubenswrapper[5099]: I0121 18:27:02.895781 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5elm9r7" Jan 21 18:27:03 crc kubenswrapper[5099]: I0121 18:27:03.428746 5099 generic.go:358] "Generic (PLEG): container finished" podID="0ca4259c-807e-4b9c-bff3-026450dc0a42" containerID="0fe9b39607e104140ee380ea6a11a243ca5c7ca8d256cfefa2157f19a01a91af" exitCode=0 Jan 21 18:27:03 crc kubenswrapper[5099]: I0121 18:27:03.429517 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f48c9f" event={"ID":"0ca4259c-807e-4b9c-bff3-026450dc0a42","Type":"ContainerDied","Data":"0fe9b39607e104140ee380ea6a11a243ca5c7ca8d256cfefa2157f19a01a91af"} Jan 21 18:27:03 crc kubenswrapper[5099]: I0121 18:27:03.515640 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5elm9r7"] Jan 21 18:27:04 crc kubenswrapper[5099]: I0121 18:27:04.277521 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5c9sl" Jan 21 18:27:04 crc kubenswrapper[5099]: I0121 18:27:04.279433 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-5c9sl" Jan 21 18:27:04 crc kubenswrapper[5099]: I0121 18:27:04.469922 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5elm9r7" event={"ID":"cf4ea907-1f59-413d-bd0e-95da9a482151","Type":"ContainerStarted","Data":"c0a700329e590601e715347436e15f5020a1ff35c1c30d9084ef30f981f56a07"} Jan 21 18:27:04 crc kubenswrapper[5099]: I0121 18:27:04.469975 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5elm9r7" event={"ID":"cf4ea907-1f59-413d-bd0e-95da9a482151","Type":"ContainerStarted","Data":"ea41a2a389c42f6ff4a0527241421fc832aa2e654cc2b30310433abbc05785d8"} Jan 21 18:27:05 crc kubenswrapper[5099]: I0121 18:27:05.564959 5099 generic.go:358] "Generic (PLEG): container finished" podID="cf4ea907-1f59-413d-bd0e-95da9a482151" containerID="c0a700329e590601e715347436e15f5020a1ff35c1c30d9084ef30f981f56a07" exitCode=0 Jan 21 18:27:05 crc kubenswrapper[5099]: I0121 18:27:05.567681 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5elm9r7" event={"ID":"cf4ea907-1f59-413d-bd0e-95da9a482151","Type":"ContainerDied","Data":"c0a700329e590601e715347436e15f5020a1ff35c1c30d9084ef30f981f56a07"} 
Jan 21 18:27:05 crc kubenswrapper[5099]: I0121 18:27:05.678347 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5c9sl" podUID="eb80b964-3167-4125-8e33-214730cc9bdf" containerName="registry-server" probeResult="failure" output=<
Jan 21 18:27:05 crc kubenswrapper[5099]: timeout: failed to connect service ":50051" within 1s
Jan 21 18:27:05 crc kubenswrapper[5099]: >
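
The startup probe failing here is the gRPC health check that openshift-marketplace catalog pods run against the registry-server on port 50051 (the "timeout: failed to connect service" wording is the grpc_health_probe output format); at this point the catalog container has only just started and is still loading its index. It recovers further down: "SyncLoop (probe)" reports the startup probe as "started" at 18:27:14.405981 and readiness as "ready" at 18:27:14.612130. Below is a minimal Go equivalent of such a probe, as a sketch assuming the standard gRPC health-checking protocol, not the exact binary the kubelet runs:

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        healthpb "google.golang.org/grpc/health/grpc_health_v1"
    )

    func main() {
        // Match the probe's 1s budget.
        ctx, cancel := context.WithTimeout(context.Background(), time.Second)
        defer cancel()

        // Dial the registry-server port the probe targets.
        conn, err := grpc.DialContext(ctx, "localhost:50051",
            grpc.WithTransportCredentials(insecure.NewCredentials()), grpc.WithBlock())
        if err != nil {
            fmt.Println("probe failure:", err) // the log above shows this timeout case
            return
        }
        defer conn.Close()

        resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
        if err != nil {
            fmt.Println("probe failure:", err)
            return
        }
        fmt.Println("probe result:", resp.GetStatus()) // SERVING once the index is loaded
    }
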
(UniqueName: \"kubernetes.io/empty-dir/4c1f0429-8f30-4646-aa1b-9913eb49ebfe-catalog-content\") pod \"certified-operators-f58tc\" (UID: \"4c1f0429-8f30-4646-aa1b-9913eb49ebfe\") " pod="openshift-marketplace/certified-operators-f58tc" Jan 21 18:27:06 crc kubenswrapper[5099]: I0121 18:27:06.644603 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c1f0429-8f30-4646-aa1b-9913eb49ebfe-utilities\") pod \"certified-operators-f58tc\" (UID: \"4c1f0429-8f30-4646-aa1b-9913eb49ebfe\") " pod="openshift-marketplace/certified-operators-f58tc" Jan 21 18:27:06 crc kubenswrapper[5099]: I0121 18:27:06.717710 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f48c9f" event={"ID":"0ca4259c-807e-4b9c-bff3-026450dc0a42","Type":"ContainerStarted","Data":"b88b979d92c3fdcd9e514e035d859f5d771b393516491d0ee8bd4ef631032d50"} Jan 21 18:27:06 crc kubenswrapper[5099]: I0121 18:27:06.735954 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c695x\" (UniqueName: \"kubernetes.io/projected/4c1f0429-8f30-4646-aa1b-9913eb49ebfe-kube-api-access-c695x\") pod \"certified-operators-f58tc\" (UID: \"4c1f0429-8f30-4646-aa1b-9913eb49ebfe\") " pod="openshift-marketplace/certified-operators-f58tc" Jan 21 18:27:06 crc kubenswrapper[5099]: I0121 18:27:06.802833 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f58tc" Jan 21 18:27:07 crc kubenswrapper[5099]: I0121 18:27:07.164219 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ztdt"] Jan 21 18:27:07 crc kubenswrapper[5099]: I0121 18:27:07.204151 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ztdt" Jan 21 18:27:07 crc kubenswrapper[5099]: I0121 18:27:07.252585 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/730e984f-9245-4a98-aefb-dda6686307f1-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ztdt\" (UID: \"730e984f-9245-4a98-aefb-dda6686307f1\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ztdt" Jan 21 18:27:07 crc kubenswrapper[5099]: I0121 18:27:07.252709 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/730e984f-9245-4a98-aefb-dda6686307f1-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ztdt\" (UID: \"730e984f-9245-4a98-aefb-dda6686307f1\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ztdt" Jan 21 18:27:07 crc kubenswrapper[5099]: I0121 18:27:07.252789 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cspf7\" (UniqueName: \"kubernetes.io/projected/730e984f-9245-4a98-aefb-dda6686307f1-kube-api-access-cspf7\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ztdt\" (UID: \"730e984f-9245-4a98-aefb-dda6686307f1\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ztdt" Jan 21 18:27:07 crc kubenswrapper[5099]: I0121 18:27:07.354073 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/730e984f-9245-4a98-aefb-dda6686307f1-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ztdt\" (UID: \"730e984f-9245-4a98-aefb-dda6686307f1\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ztdt" Jan 21 18:27:07 crc kubenswrapper[5099]: I0121 18:27:07.354151 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cspf7\" (UniqueName: \"kubernetes.io/projected/730e984f-9245-4a98-aefb-dda6686307f1-kube-api-access-cspf7\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ztdt\" (UID: \"730e984f-9245-4a98-aefb-dda6686307f1\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ztdt" Jan 21 18:27:07 crc kubenswrapper[5099]: I0121 18:27:07.354201 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/730e984f-9245-4a98-aefb-dda6686307f1-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ztdt\" (UID: \"730e984f-9245-4a98-aefb-dda6686307f1\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ztdt" Jan 21 18:27:07 crc kubenswrapper[5099]: I0121 18:27:07.354918 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/730e984f-9245-4a98-aefb-dda6686307f1-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ztdt\" (UID: \"730e984f-9245-4a98-aefb-dda6686307f1\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ztdt" Jan 21 18:27:07 crc kubenswrapper[5099]: I0121 18:27:07.355175 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" 
(UniqueName: \"kubernetes.io/empty-dir/730e984f-9245-4a98-aefb-dda6686307f1-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ztdt\" (UID: \"730e984f-9245-4a98-aefb-dda6686307f1\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ztdt" Jan 21 18:27:07 crc kubenswrapper[5099]: I0121 18:27:07.683964 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cspf7\" (UniqueName: \"kubernetes.io/projected/730e984f-9245-4a98-aefb-dda6686307f1-kube-api-access-cspf7\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ztdt\" (UID: \"730e984f-9245-4a98-aefb-dda6686307f1\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ztdt" Jan 21 18:27:07 crc kubenswrapper[5099]: I0121 18:27:07.733238 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5elm9r7" event={"ID":"cf4ea907-1f59-413d-bd0e-95da9a482151","Type":"ContainerStarted","Data":"943e5afdedf9490a384e25aabba97080804e995239c9d4dcf19124e51f5c6af8"} Jan 21 18:27:07 crc kubenswrapper[5099]: I0121 18:27:07.770873 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f58tc"] Jan 21 18:27:07 crc kubenswrapper[5099]: W0121 18:27:07.777669 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c1f0429_8f30_4646_aa1b_9913eb49ebfe.slice/crio-22f0ac5d4ad56994430cc2ae3c9b5a4e789c92e34f04bf75a51aef9c187b7828 WatchSource:0}: Error finding container 22f0ac5d4ad56994430cc2ae3c9b5a4e789c92e34f04bf75a51aef9c187b7828: Status 404 returned error can't find the container with id 22f0ac5d4ad56994430cc2ae3c9b5a4e789c92e34f04bf75a51aef9c187b7828 Jan 21 18:27:07 crc kubenswrapper[5099]: I0121 18:27:07.831460 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ztdt"] Jan 21 18:27:07 crc kubenswrapper[5099]: I0121 18:27:07.854591 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ztdt" Jan 21 18:27:08 crc kubenswrapper[5099]: I0121 18:27:08.625147 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ztdt"] Jan 21 18:27:08 crc kubenswrapper[5099]: I0121 18:27:08.751727 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ztdt" event={"ID":"730e984f-9245-4a98-aefb-dda6686307f1","Type":"ContainerStarted","Data":"ab5d5e665a0a27e176ce664176397d551799b73f915d5646e5aa4cff6218f999"} Jan 21 18:27:08 crc kubenswrapper[5099]: I0121 18:27:08.753760 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f58tc" event={"ID":"4c1f0429-8f30-4646-aa1b-9913eb49ebfe","Type":"ContainerStarted","Data":"22f0ac5d4ad56994430cc2ae3c9b5a4e789c92e34f04bf75a51aef9c187b7828"} Jan 21 18:27:09 crc kubenswrapper[5099]: I0121 18:27:09.762828 5099 generic.go:358] "Generic (PLEG): container finished" podID="cf4ea907-1f59-413d-bd0e-95da9a482151" containerID="943e5afdedf9490a384e25aabba97080804e995239c9d4dcf19124e51f5c6af8" exitCode=0 Jan 21 18:27:09 crc kubenswrapper[5099]: I0121 18:27:09.762947 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5elm9r7" event={"ID":"cf4ea907-1f59-413d-bd0e-95da9a482151","Type":"ContainerDied","Data":"943e5afdedf9490a384e25aabba97080804e995239c9d4dcf19124e51f5c6af8"} Jan 21 18:27:09 crc kubenswrapper[5099]: I0121 18:27:09.767642 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ztdt" event={"ID":"730e984f-9245-4a98-aefb-dda6686307f1","Type":"ContainerStarted","Data":"e285c6df6f452d37f3cb6b88ccf28886f07a4179f322ed5a9472c9eac5352c23"} Jan 21 18:27:09 crc kubenswrapper[5099]: I0121 18:27:09.776486 5099 generic.go:358] "Generic (PLEG): container finished" podID="0ca4259c-807e-4b9c-bff3-026450dc0a42" containerID="b88b979d92c3fdcd9e514e035d859f5d771b393516491d0ee8bd4ef631032d50" exitCode=0 Jan 21 18:27:09 crc kubenswrapper[5099]: I0121 18:27:09.776788 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f48c9f" event={"ID":"0ca4259c-807e-4b9c-bff3-026450dc0a42","Type":"ContainerDied","Data":"b88b979d92c3fdcd9e514e035d859f5d771b393516491d0ee8bd4ef631032d50"} Jan 21 18:27:09 crc kubenswrapper[5099]: I0121 18:27:09.783341 5099 generic.go:358] "Generic (PLEG): container finished" podID="4c1f0429-8f30-4646-aa1b-9913eb49ebfe" containerID="ebec7c7c6354aea243b57bacff79f3de789e8b324329866809660649d922e444" exitCode=0 Jan 21 18:27:09 crc kubenswrapper[5099]: I0121 18:27:09.783460 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f58tc" event={"ID":"4c1f0429-8f30-4646-aa1b-9913eb49ebfe","Type":"ContainerDied","Data":"ebec7c7c6354aea243b57bacff79f3de789e8b324329866809660649d922e444"} Jan 21 18:27:10 crc kubenswrapper[5099]: I0121 18:27:10.803049 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f48c9f" event={"ID":"0ca4259c-807e-4b9c-bff3-026450dc0a42","Type":"ContainerStarted","Data":"ebb1152f074e976590046f030acceaf03c3c8311275f167b43f18aeea72aa328"} Jan 21 
18:27:10 crc kubenswrapper[5099]: I0121 18:27:10.811346 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5elm9r7" event={"ID":"cf4ea907-1f59-413d-bd0e-95da9a482151","Type":"ContainerStarted","Data":"c3ff1fc5360b87ee1bd6177d6134b4ae006dc8e40e4911c684c083da33870229"} Jan 21 18:27:10 crc kubenswrapper[5099]: I0121 18:27:10.820634 5099 generic.go:358] "Generic (PLEG): container finished" podID="730e984f-9245-4a98-aefb-dda6686307f1" containerID="e285c6df6f452d37f3cb6b88ccf28886f07a4179f322ed5a9472c9eac5352c23" exitCode=0 Jan 21 18:27:10 crc kubenswrapper[5099]: I0121 18:27:10.820707 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ztdt" event={"ID":"730e984f-9245-4a98-aefb-dda6686307f1","Type":"ContainerDied","Data":"e285c6df6f452d37f3cb6b88ccf28886f07a4179f322ed5a9472c9eac5352c23"} Jan 21 18:27:10 crc kubenswrapper[5099]: I0121 18:27:10.907058 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f48c9f" podStartSLOduration=7.373356538 podStartE2EDuration="9.907035935s" podCreationTimestamp="2026-01-21 18:27:01 +0000 UTC" firstStartedPulling="2026-01-21 18:27:03.433086333 +0000 UTC m=+780.847048794" lastFinishedPulling="2026-01-21 18:27:05.96676572 +0000 UTC m=+783.380728191" observedRunningTime="2026-01-21 18:27:10.900983684 +0000 UTC m=+788.314946165" watchObservedRunningTime="2026-01-21 18:27:10.907035935 +0000 UTC m=+788.320998396" Jan 21 18:27:10 crc kubenswrapper[5099]: I0121 18:27:10.958777 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5elm9r7" podStartSLOduration=8.714690863 podStartE2EDuration="9.95875007s" podCreationTimestamp="2026-01-21 18:27:01 +0000 UTC" firstStartedPulling="2026-01-21 18:27:05.572474014 +0000 UTC m=+782.986436475" lastFinishedPulling="2026-01-21 18:27:06.816533221 +0000 UTC m=+784.230495682" observedRunningTime="2026-01-21 18:27:10.953462896 +0000 UTC m=+788.367425367" watchObservedRunningTime="2026-01-21 18:27:10.95875007 +0000 UTC m=+788.372712521" Jan 21 18:27:12 crc kubenswrapper[5099]: I0121 18:27:12.004421 5099 generic.go:358] "Generic (PLEG): container finished" podID="cf4ea907-1f59-413d-bd0e-95da9a482151" containerID="c3ff1fc5360b87ee1bd6177d6134b4ae006dc8e40e4911c684c083da33870229" exitCode=0 Jan 21 18:27:12 crc kubenswrapper[5099]: I0121 18:27:12.005005 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5elm9r7" event={"ID":"cf4ea907-1f59-413d-bd0e-95da9a482151","Type":"ContainerDied","Data":"c3ff1fc5360b87ee1bd6177d6134b4ae006dc8e40e4911c684c083da33870229"} Jan 21 18:27:12 crc kubenswrapper[5099]: I0121 18:27:12.019126 5099 generic.go:358] "Generic (PLEG): container finished" podID="0ca4259c-807e-4b9c-bff3-026450dc0a42" containerID="ebb1152f074e976590046f030acceaf03c3c8311275f167b43f18aeea72aa328" exitCode=0 Jan 21 18:27:12 crc kubenswrapper[5099]: I0121 18:27:12.019219 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f48c9f" event={"ID":"0ca4259c-807e-4b9c-bff3-026450dc0a42","Type":"ContainerDied","Data":"ebb1152f074e976590046f030acceaf03c3c8311275f167b43f18aeea72aa328"}
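
The two pod_startup_latency_tracker entries above report a pair of durations whose relationship is easy to miss: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, while podStartSLOduration is, judging by the numbers in this log, that same interval with the image-pull window (lastFinishedPulling minus firstStartedPulling) subtracted, which is why it is always the smaller value. The monotonic clock offsets (the m=+... readings) make this checkable; a Go sketch for the 6e3e74c2... pod:

    package main

    import "fmt"

    // Cross-checks the 18:27:10.907058 entry above using its monotonic
    // clock offsets (the m=+... values), on the reading that the SLO
    // duration excludes time spent pulling images.
    func main() {
        const (
            e2e   = 9.907035935   // podStartE2EDuration, in seconds
            first = 780.847048794 // firstStartedPulling, m=+ offset
            last  = 783.380728191 // lastFinishedPulling, m=+ offset
        )
        fmt.Printf("%.9f\n", e2e-(last-first)) // prints 7.373356538, the logged podStartSLOduration
    }

The 8ed862... entry checks out the same way: 9.95875007 − (784.230495682 − 782.986436475) = 8.714690863.
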
Jan 21 18:27:13 crc kubenswrapper[5099]: I0121 18:27:13.776048 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-7w26r"] Jan 21 18:27:13 crc kubenswrapper[5099]: I0121 18:27:13.839657 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-7w26r"] Jan 21 18:27:13 crc kubenswrapper[5099]: I0121 18:27:13.839951 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-7w26r" Jan 21 18:27:13 crc kubenswrapper[5099]: I0121 18:27:13.845511 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"kube-root-ca.crt\"" Jan 21 18:27:13 crc kubenswrapper[5099]: I0121 18:27:13.846037 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"openshift-service-ca.crt\"" Jan 21 18:27:13 crc kubenswrapper[5099]: I0121 18:27:13.846096 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-6p4xm\"" Jan 21 18:27:13 crc kubenswrapper[5099]: I0121 18:27:13.969132 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xflmm\" (UniqueName: \"kubernetes.io/projected/becb8e6d-88cd-4469-a912-f5e13a03e815-kube-api-access-xflmm\") pod \"obo-prometheus-operator-9bc85b4bf-7w26r\" (UID: \"becb8e6d-88cd-4469-a912-f5e13a03e815\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-7w26r" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.020405 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6bcfc495c6-vprfd"] Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.034195 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bcfc495c6-vprfd" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.038990 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-czrxm\"" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.039363 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\"" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.053566 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6bcfc495c6-8wjp5"] Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.067572 5099 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bcfc495c6-8wjp5" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.070526 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xflmm\" (UniqueName: \"kubernetes.io/projected/becb8e6d-88cd-4469-a912-f5e13a03e815-kube-api-access-xflmm\") pod \"obo-prometheus-operator-9bc85b4bf-7w26r\" (UID: \"becb8e6d-88cd-4469-a912-f5e13a03e815\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-7w26r" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.097812 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6bcfc495c6-vprfd"] Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.113005 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6bcfc495c6-8wjp5"] Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.158889 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xflmm\" (UniqueName: \"kubernetes.io/projected/becb8e6d-88cd-4469-a912-f5e13a03e815-kube-api-access-xflmm\") pod \"obo-prometheus-operator-9bc85b4bf-7w26r\" (UID: \"becb8e6d-88cd-4469-a912-f5e13a03e815\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-7w26r" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.163372 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-7w26r" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.174268 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f5eafa3f-5eb2-445a-a0db-d33e4783861e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6bcfc495c6-vprfd\" (UID: \"f5eafa3f-5eb2-445a-a0db-d33e4783861e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bcfc495c6-vprfd" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.174328 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/640f434d-3e8f-4429-a9b7-89a58100e49c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6bcfc495c6-8wjp5\" (UID: \"640f434d-3e8f-4429-a9b7-89a58100e49c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bcfc495c6-8wjp5" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.174349 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f5eafa3f-5eb2-445a-a0db-d33e4783861e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6bcfc495c6-vprfd\" (UID: \"f5eafa3f-5eb2-445a-a0db-d33e4783861e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bcfc495c6-vprfd" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.174421 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/640f434d-3e8f-4429-a9b7-89a58100e49c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6bcfc495c6-8wjp5\" (UID: \"640f434d-3e8f-4429-a9b7-89a58100e49c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bcfc495c6-8wjp5" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.208967 5099 
kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-85c68dddb-mjp6r"] Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.282627 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f5eafa3f-5eb2-445a-a0db-d33e4783861e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6bcfc495c6-vprfd\" (UID: \"f5eafa3f-5eb2-445a-a0db-d33e4783861e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bcfc495c6-vprfd" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.282787 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/640f434d-3e8f-4429-a9b7-89a58100e49c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6bcfc495c6-8wjp5\" (UID: \"640f434d-3e8f-4429-a9b7-89a58100e49c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bcfc495c6-8wjp5" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.282868 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f5eafa3f-5eb2-445a-a0db-d33e4783861e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6bcfc495c6-vprfd\" (UID: \"f5eafa3f-5eb2-445a-a0db-d33e4783861e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bcfc495c6-vprfd" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.282941 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/640f434d-3e8f-4429-a9b7-89a58100e49c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6bcfc495c6-8wjp5\" (UID: \"640f434d-3e8f-4429-a9b7-89a58100e49c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bcfc495c6-8wjp5" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.288204 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/640f434d-3e8f-4429-a9b7-89a58100e49c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6bcfc495c6-8wjp5\" (UID: \"640f434d-3e8f-4429-a9b7-89a58100e49c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bcfc495c6-8wjp5" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.288453 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/640f434d-3e8f-4429-a9b7-89a58100e49c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6bcfc495c6-8wjp5\" (UID: \"640f434d-3e8f-4429-a9b7-89a58100e49c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bcfc495c6-8wjp5" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.292617 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f5eafa3f-5eb2-445a-a0db-d33e4783861e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6bcfc495c6-vprfd\" (UID: \"f5eafa3f-5eb2-445a-a0db-d33e4783861e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bcfc495c6-vprfd" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.293883 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f48c9f" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.298319 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f5eafa3f-5eb2-445a-a0db-d33e4783861e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6bcfc495c6-vprfd\" (UID: \"f5eafa3f-5eb2-445a-a0db-d33e4783861e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bcfc495c6-vprfd" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.389164 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0ca4259c-807e-4b9c-bff3-026450dc0a42-bundle\") pod \"0ca4259c-807e-4b9c-bff3-026450dc0a42\" (UID: \"0ca4259c-807e-4b9c-bff3-026450dc0a42\") " Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.389327 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rf6dl\" (UniqueName: \"kubernetes.io/projected/0ca4259c-807e-4b9c-bff3-026450dc0a42-kube-api-access-rf6dl\") pod \"0ca4259c-807e-4b9c-bff3-026450dc0a42\" (UID: \"0ca4259c-807e-4b9c-bff3-026450dc0a42\") " Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.389371 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0ca4259c-807e-4b9c-bff3-026450dc0a42-util\") pod \"0ca4259c-807e-4b9c-bff3-026450dc0a42\" (UID: \"0ca4259c-807e-4b9c-bff3-026450dc0a42\") " Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.391205 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ca4259c-807e-4b9c-bff3-026450dc0a42-bundle" (OuterVolumeSpecName: "bundle") pod "0ca4259c-807e-4b9c-bff3-026450dc0a42" (UID: "0ca4259c-807e-4b9c-bff3-026450dc0a42"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.401526 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5elm9r7" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.402189 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ca4259c-807e-4b9c-bff3-026450dc0a42-kube-api-access-rf6dl" (OuterVolumeSpecName: "kube-api-access-rf6dl") pod "0ca4259c-807e-4b9c-bff3-026450dc0a42" (UID: "0ca4259c-807e-4b9c-bff3-026450dc0a42"). InnerVolumeSpecName "kube-api-access-rf6dl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.405981 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5c9sl" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.406028 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-mjp6r"] Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.407109 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-mjp6r" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.410225 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bcfc495c6-vprfd" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.419426 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ca4259c-807e-4b9c-bff3-026450dc0a42-util" (OuterVolumeSpecName: "util") pod "0ca4259c-807e-4b9c-bff3-026450dc0a42" (UID: "0ca4259c-807e-4b9c-bff3-026450dc0a42"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.421284 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-tls\"" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.433025 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-wcgwm\"" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.446353 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-9fvzk"] Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.447131 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cf4ea907-1f59-413d-bd0e-95da9a482151" containerName="util" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.447146 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf4ea907-1f59-413d-bd0e-95da9a482151" containerName="util" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.447154 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0ca4259c-807e-4b9c-bff3-026450dc0a42" containerName="pull" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.447160 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ca4259c-807e-4b9c-bff3-026450dc0a42" containerName="pull" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.447170 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cf4ea907-1f59-413d-bd0e-95da9a482151" containerName="extract" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.447176 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf4ea907-1f59-413d-bd0e-95da9a482151" containerName="extract" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.447190 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0ca4259c-807e-4b9c-bff3-026450dc0a42" containerName="extract" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.447195 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ca4259c-807e-4b9c-bff3-026450dc0a42" containerName="extract" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.447211 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0ca4259c-807e-4b9c-bff3-026450dc0a42" containerName="util" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.447218 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ca4259c-807e-4b9c-bff3-026450dc0a42" containerName="util" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.447224 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cf4ea907-1f59-413d-bd0e-95da9a482151" containerName="pull" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.447230 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf4ea907-1f59-413d-bd0e-95da9a482151" containerName="pull" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.447349 5099 memory_manager.go:356] 
"RemoveStaleState removing state" podUID="0ca4259c-807e-4b9c-bff3-026450dc0a42" containerName="extract" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.447368 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="cf4ea907-1f59-413d-bd0e-95da9a482151" containerName="extract" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.459860 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-9fvzk" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.475857 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"perses-operator-dockercfg-9rjhr\"" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.483093 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-9fvzk"] Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.490248 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cf4ea907-1f59-413d-bd0e-95da9a482151-bundle\") pod \"cf4ea907-1f59-413d-bd0e-95da9a482151\" (UID: \"cf4ea907-1f59-413d-bd0e-95da9a482151\") " Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.490377 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cf4ea907-1f59-413d-bd0e-95da9a482151-util\") pod \"cf4ea907-1f59-413d-bd0e-95da9a482151\" (UID: \"cf4ea907-1f59-413d-bd0e-95da9a482151\") " Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.490459 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sx654\" (UniqueName: \"kubernetes.io/projected/cf4ea907-1f59-413d-bd0e-95da9a482151-kube-api-access-sx654\") pod \"cf4ea907-1f59-413d-bd0e-95da9a482151\" (UID: \"cf4ea907-1f59-413d-bd0e-95da9a482151\") " Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.490653 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fpd5\" (UniqueName: \"kubernetes.io/projected/02638344-e66d-4d9e-bea9-cdf3c1040c33-kube-api-access-5fpd5\") pod \"observability-operator-85c68dddb-mjp6r\" (UID: \"02638344-e66d-4d9e-bea9-cdf3c1040c33\") " pod="openshift-operators/observability-operator-85c68dddb-mjp6r" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.490782 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/02638344-e66d-4d9e-bea9-cdf3c1040c33-observability-operator-tls\") pod \"observability-operator-85c68dddb-mjp6r\" (UID: \"02638344-e66d-4d9e-bea9-cdf3c1040c33\") " pod="openshift-operators/observability-operator-85c68dddb-mjp6r" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.490856 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rf6dl\" (UniqueName: \"kubernetes.io/projected/0ca4259c-807e-4b9c-bff3-026450dc0a42-kube-api-access-rf6dl\") on node \"crc\" DevicePath \"\"" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.490867 5099 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0ca4259c-807e-4b9c-bff3-026450dc0a42-util\") on node \"crc\" DevicePath \"\"" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.490877 5099 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/0ca4259c-807e-4b9c-bff3-026450dc0a42-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.492499 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf4ea907-1f59-413d-bd0e-95da9a482151-bundle" (OuterVolumeSpecName: "bundle") pod "cf4ea907-1f59-413d-bd0e-95da9a482151" (UID: "cf4ea907-1f59-413d-bd0e-95da9a482151"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.493367 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bcfc495c6-8wjp5" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.542809 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf4ea907-1f59-413d-bd0e-95da9a482151-util" (OuterVolumeSpecName: "util") pod "cf4ea907-1f59-413d-bd0e-95da9a482151" (UID: "cf4ea907-1f59-413d-bd0e-95da9a482151"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.564943 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf4ea907-1f59-413d-bd0e-95da9a482151-kube-api-access-sx654" (OuterVolumeSpecName: "kube-api-access-sx654") pod "cf4ea907-1f59-413d-bd0e-95da9a482151" (UID: "cf4ea907-1f59-413d-bd0e-95da9a482151"). InnerVolumeSpecName "kube-api-access-sx654". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.591532 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5fpd5\" (UniqueName: \"kubernetes.io/projected/02638344-e66d-4d9e-bea9-cdf3c1040c33-kube-api-access-5fpd5\") pod \"observability-operator-85c68dddb-mjp6r\" (UID: \"02638344-e66d-4d9e-bea9-cdf3c1040c33\") " pod="openshift-operators/observability-operator-85c68dddb-mjp6r" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.591591 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv52x\" (UniqueName: \"kubernetes.io/projected/aceb441c-bf15-4d82-908b-d5300c9a526e-kube-api-access-cv52x\") pod \"perses-operator-669c9f96b5-9fvzk\" (UID: \"aceb441c-bf15-4d82-908b-d5300c9a526e\") " pod="openshift-operators/perses-operator-669c9f96b5-9fvzk" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.591693 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/aceb441c-bf15-4d82-908b-d5300c9a526e-openshift-service-ca\") pod \"perses-operator-669c9f96b5-9fvzk\" (UID: \"aceb441c-bf15-4d82-908b-d5300c9a526e\") " pod="openshift-operators/perses-operator-669c9f96b5-9fvzk" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.591779 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/02638344-e66d-4d9e-bea9-cdf3c1040c33-observability-operator-tls\") pod \"observability-operator-85c68dddb-mjp6r\" (UID: \"02638344-e66d-4d9e-bea9-cdf3c1040c33\") " pod="openshift-operators/observability-operator-85c68dddb-mjp6r" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.591859 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sx654\" (UniqueName: 
\"kubernetes.io/projected/cf4ea907-1f59-413d-bd0e-95da9a482151-kube-api-access-sx654\") on node \"crc\" DevicePath \"\"" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.591883 5099 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cf4ea907-1f59-413d-bd0e-95da9a482151-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.591895 5099 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cf4ea907-1f59-413d-bd0e-95da9a482151-util\") on node \"crc\" DevicePath \"\"" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.607683 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/02638344-e66d-4d9e-bea9-cdf3c1040c33-observability-operator-tls\") pod \"observability-operator-85c68dddb-mjp6r\" (UID: \"02638344-e66d-4d9e-bea9-cdf3c1040c33\") " pod="openshift-operators/observability-operator-85c68dddb-mjp6r" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.612130 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5c9sl" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.624618 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fpd5\" (UniqueName: \"kubernetes.io/projected/02638344-e66d-4d9e-bea9-cdf3c1040c33-kube-api-access-5fpd5\") pod \"observability-operator-85c68dddb-mjp6r\" (UID: \"02638344-e66d-4d9e-bea9-cdf3c1040c33\") " pod="openshift-operators/observability-operator-85c68dddb-mjp6r" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.745182 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cv52x\" (UniqueName: \"kubernetes.io/projected/aceb441c-bf15-4d82-908b-d5300c9a526e-kube-api-access-cv52x\") pod \"perses-operator-669c9f96b5-9fvzk\" (UID: \"aceb441c-bf15-4d82-908b-d5300c9a526e\") " pod="openshift-operators/perses-operator-669c9f96b5-9fvzk" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.745279 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/aceb441c-bf15-4d82-908b-d5300c9a526e-openshift-service-ca\") pod \"perses-operator-669c9f96b5-9fvzk\" (UID: \"aceb441c-bf15-4d82-908b-d5300c9a526e\") " pod="openshift-operators/perses-operator-669c9f96b5-9fvzk" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.748837 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/aceb441c-bf15-4d82-908b-d5300c9a526e-openshift-service-ca\") pod \"perses-operator-669c9f96b5-9fvzk\" (UID: \"aceb441c-bf15-4d82-908b-d5300c9a526e\") " pod="openshift-operators/perses-operator-669c9f96b5-9fvzk" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.754973 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-mjp6r" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.797309 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv52x\" (UniqueName: \"kubernetes.io/projected/aceb441c-bf15-4d82-908b-d5300c9a526e-kube-api-access-cv52x\") pod \"perses-operator-669c9f96b5-9fvzk\" (UID: \"aceb441c-bf15-4d82-908b-d5300c9a526e\") " pod="openshift-operators/perses-operator-669c9f96b5-9fvzk" Jan 21 18:27:14 crc kubenswrapper[5099]: I0121 18:27:14.948498 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-9fvzk" Jan 21 18:27:15 crc kubenswrapper[5099]: I0121 18:27:15.266554 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5elm9r7" Jan 21 18:27:15 crc kubenswrapper[5099]: I0121 18:27:15.266597 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5elm9r7" event={"ID":"cf4ea907-1f59-413d-bd0e-95da9a482151","Type":"ContainerDied","Data":"ea41a2a389c42f6ff4a0527241421fc832aa2e654cc2b30310433abbc05785d8"} Jan 21 18:27:15 crc kubenswrapper[5099]: I0121 18:27:15.267381 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea41a2a389c42f6ff4a0527241421fc832aa2e654cc2b30310433abbc05785d8" Jan 21 18:27:15 crc kubenswrapper[5099]: I0121 18:27:15.281400 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f48c9f" Jan 21 18:27:15 crc kubenswrapper[5099]: I0121 18:27:15.282636 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f48c9f" event={"ID":"0ca4259c-807e-4b9c-bff3-026450dc0a42","Type":"ContainerDied","Data":"c3f332f3563768dd3da5dfb00e963f4268908b6b332d8c1f54e4aacfb2aa2efe"} Jan 21 18:27:15 crc kubenswrapper[5099]: I0121 18:27:15.282689 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3f332f3563768dd3da5dfb00e963f4268908b6b332d8c1f54e4aacfb2aa2efe" Jan 21 18:27:15 crc kubenswrapper[5099]: I0121 18:27:15.296205 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-7w26r"] Jan 21 18:27:16 crc kubenswrapper[5099]: I0121 18:27:16.111938 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-mjp6r"] Jan 21 18:27:16 crc kubenswrapper[5099]: W0121 18:27:16.146085 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02638344_e66d_4d9e_bea9_cdf3c1040c33.slice/crio-0c8c4e091ed0e5322bd15c5bc9210980f6c35e2fe27581d9fad5027015f65b3b WatchSource:0}: Error finding container 0c8c4e091ed0e5322bd15c5bc9210980f6c35e2fe27581d9fad5027015f65b3b: Status 404 returned error can't find the container with id 0c8c4e091ed0e5322bd15c5bc9210980f6c35e2fe27581d9fad5027015f65b3b Jan 21 18:27:16 crc kubenswrapper[5099]: I0121 18:27:16.154154 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6bcfc495c6-vprfd"] Jan 21 18:27:16 crc kubenswrapper[5099]: I0121 18:27:16.278142 5099 kubelet.go:2544] "SyncLoop UPDATE" 
source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6bcfc495c6-8wjp5"] Jan 21 18:27:16 crc kubenswrapper[5099]: I0121 18:27:16.300379 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-7w26r" event={"ID":"becb8e6d-88cd-4469-a912-f5e13a03e815","Type":"ContainerStarted","Data":"1f17e50de53e4c9a504556c08ccb84ef264383da033de907cd3cef2b030157fa"} Jan 21 18:27:16 crc kubenswrapper[5099]: I0121 18:27:16.308951 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bcfc495c6-vprfd" event={"ID":"f5eafa3f-5eb2-445a-a0db-d33e4783861e","Type":"ContainerStarted","Data":"b92433c8ae63a317f1c1a58c025c45e8e285ffd99787376d856f16d728f0eb52"} Jan 21 18:27:16 crc kubenswrapper[5099]: I0121 18:27:16.311064 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-mjp6r" event={"ID":"02638344-e66d-4d9e-bea9-cdf3c1040c33","Type":"ContainerStarted","Data":"0c8c4e091ed0e5322bd15c5bc9210980f6c35e2fe27581d9fad5027015f65b3b"} Jan 21 18:27:16 crc kubenswrapper[5099]: I0121 18:27:16.561339 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-9fvzk"] Jan 21 18:27:17 crc kubenswrapper[5099]: I0121 18:27:17.322110 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bcfc495c6-8wjp5" event={"ID":"640f434d-3e8f-4429-a9b7-89a58100e49c","Type":"ContainerStarted","Data":"ed77a062b526665434a2d0a0fc508da30e9535e1dc522c6ebf684683787f7fa1"} Jan 21 18:27:17 crc kubenswrapper[5099]: I0121 18:27:17.326677 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-9fvzk" event={"ID":"aceb441c-bf15-4d82-908b-d5300c9a526e","Type":"ContainerStarted","Data":"ab7345ed84c359f5d566af72c36bd7ff8dcb25628214da9d25a50934671a5e3e"} Jan 21 18:27:17 crc kubenswrapper[5099]: I0121 18:27:17.484647 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5c9sl"] Jan 21 18:27:17 crc kubenswrapper[5099]: I0121 18:27:17.485192 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5c9sl" podUID="eb80b964-3167-4125-8e33-214730cc9bdf" containerName="registry-server" containerID="cri-o://64323e1dd8904b7df0f29cd25f50851387535180c8fe7f725369ab202e6a2c31" gracePeriod=2 Jan 21 18:27:18 crc kubenswrapper[5099]: I0121 18:27:18.345789 5099 generic.go:358] "Generic (PLEG): container finished" podID="eb80b964-3167-4125-8e33-214730cc9bdf" containerID="64323e1dd8904b7df0f29cd25f50851387535180c8fe7f725369ab202e6a2c31" exitCode=0 Jan 21 18:27:18 crc kubenswrapper[5099]: I0121 18:27:18.345869 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5c9sl" event={"ID":"eb80b964-3167-4125-8e33-214730cc9bdf","Type":"ContainerDied","Data":"64323e1dd8904b7df0f29cd25f50851387535180c8fe7f725369ab202e6a2c31"} Jan 21 18:27:22 crc kubenswrapper[5099]: I0121 18:27:22.064554 5099 patch_prober.go:28] interesting pod/machine-config-daemon-hsl47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 18:27:22 crc kubenswrapper[5099]: I0121 18:27:22.064865 5099 prober.go:120] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 18:27:22 crc kubenswrapper[5099]: I0121 18:27:22.194225 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-77fcd4bd5f-lbdf6"] Jan 21 18:27:22 crc kubenswrapper[5099]: I0121 18:27:22.239645 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-77fcd4bd5f-lbdf6" Jan 21 18:27:22 crc kubenswrapper[5099]: I0121 18:27:22.248725 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"openshift-service-ca.crt\"" Jan 21 18:27:22 crc kubenswrapper[5099]: I0121 18:27:22.249208 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-dockercfg-sppgn\"" Jan 21 18:27:22 crc kubenswrapper[5099]: I0121 18:27:22.250029 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-service-cert\"" Jan 21 18:27:22 crc kubenswrapper[5099]: I0121 18:27:22.250420 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"kube-root-ca.crt\"" Jan 21 18:27:22 crc kubenswrapper[5099]: I0121 18:27:22.257200 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-77fcd4bd5f-lbdf6"] Jan 21 18:27:22 crc kubenswrapper[5099]: I0121 18:27:22.390502 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95r74\" (UniqueName: \"kubernetes.io/projected/4ac61964-c47e-486d-b2d3-13c9d16ae66c-kube-api-access-95r74\") pod \"elastic-operator-77fcd4bd5f-lbdf6\" (UID: \"4ac61964-c47e-486d-b2d3-13c9d16ae66c\") " pod="service-telemetry/elastic-operator-77fcd4bd5f-lbdf6" Jan 21 18:27:22 crc kubenswrapper[5099]: I0121 18:27:22.390572 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4ac61964-c47e-486d-b2d3-13c9d16ae66c-webhook-cert\") pod \"elastic-operator-77fcd4bd5f-lbdf6\" (UID: \"4ac61964-c47e-486d-b2d3-13c9d16ae66c\") " pod="service-telemetry/elastic-operator-77fcd4bd5f-lbdf6" Jan 21 18:27:22 crc kubenswrapper[5099]: I0121 18:27:22.390599 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4ac61964-c47e-486d-b2d3-13c9d16ae66c-apiservice-cert\") pod \"elastic-operator-77fcd4bd5f-lbdf6\" (UID: \"4ac61964-c47e-486d-b2d3-13c9d16ae66c\") " pod="service-telemetry/elastic-operator-77fcd4bd5f-lbdf6" Jan 21 18:27:22 crc kubenswrapper[5099]: I0121 18:27:22.492968 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-95r74\" (UniqueName: \"kubernetes.io/projected/4ac61964-c47e-486d-b2d3-13c9d16ae66c-kube-api-access-95r74\") pod \"elastic-operator-77fcd4bd5f-lbdf6\" (UID: \"4ac61964-c47e-486d-b2d3-13c9d16ae66c\") " pod="service-telemetry/elastic-operator-77fcd4bd5f-lbdf6" Jan 21 18:27:22 crc kubenswrapper[5099]: I0121 18:27:22.493040 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/4ac61964-c47e-486d-b2d3-13c9d16ae66c-webhook-cert\") pod \"elastic-operator-77fcd4bd5f-lbdf6\" (UID: \"4ac61964-c47e-486d-b2d3-13c9d16ae66c\") " pod="service-telemetry/elastic-operator-77fcd4bd5f-lbdf6" Jan 21 18:27:22 crc kubenswrapper[5099]: I0121 18:27:22.493076 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4ac61964-c47e-486d-b2d3-13c9d16ae66c-apiservice-cert\") pod \"elastic-operator-77fcd4bd5f-lbdf6\" (UID: \"4ac61964-c47e-486d-b2d3-13c9d16ae66c\") " pod="service-telemetry/elastic-operator-77fcd4bd5f-lbdf6" Jan 21 18:27:22 crc kubenswrapper[5099]: I0121 18:27:22.507447 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4ac61964-c47e-486d-b2d3-13c9d16ae66c-apiservice-cert\") pod \"elastic-operator-77fcd4bd5f-lbdf6\" (UID: \"4ac61964-c47e-486d-b2d3-13c9d16ae66c\") " pod="service-telemetry/elastic-operator-77fcd4bd5f-lbdf6" Jan 21 18:27:22 crc kubenswrapper[5099]: I0121 18:27:22.509805 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4ac61964-c47e-486d-b2d3-13c9d16ae66c-webhook-cert\") pod \"elastic-operator-77fcd4bd5f-lbdf6\" (UID: \"4ac61964-c47e-486d-b2d3-13c9d16ae66c\") " pod="service-telemetry/elastic-operator-77fcd4bd5f-lbdf6" Jan 21 18:27:22 crc kubenswrapper[5099]: I0121 18:27:22.518107 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-95r74\" (UniqueName: \"kubernetes.io/projected/4ac61964-c47e-486d-b2d3-13c9d16ae66c-kube-api-access-95r74\") pod \"elastic-operator-77fcd4bd5f-lbdf6\" (UID: \"4ac61964-c47e-486d-b2d3-13c9d16ae66c\") " pod="service-telemetry/elastic-operator-77fcd4bd5f-lbdf6" Jan 21 18:27:22 crc kubenswrapper[5099]: I0121 18:27:22.587043 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-77fcd4bd5f-lbdf6" Jan 21 18:27:23 crc kubenswrapper[5099]: I0121 18:27:23.467292 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-fs494"] Jan 21 18:27:23 crc kubenswrapper[5099]: I0121 18:27:23.488980 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-fs494"] Jan 21 18:27:23 crc kubenswrapper[5099]: I0121 18:27:23.489199 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-fs494" Jan 21 18:27:23 crc kubenswrapper[5099]: I0121 18:27:23.492174 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"interconnect-operator-dockercfg-lv4w6\"" Jan 21 18:27:23 crc kubenswrapper[5099]: I0121 18:27:23.515016 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rq6xx\" (UniqueName: \"kubernetes.io/projected/252bafbe-0c68-4b4b-85f4-9f782a1b57b5-kube-api-access-rq6xx\") pod \"interconnect-operator-78b9bd8798-fs494\" (UID: \"252bafbe-0c68-4b4b-85f4-9f782a1b57b5\") " pod="service-telemetry/interconnect-operator-78b9bd8798-fs494" Jan 21 18:27:23 crc kubenswrapper[5099]: I0121 18:27:23.617253 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rq6xx\" (UniqueName: \"kubernetes.io/projected/252bafbe-0c68-4b4b-85f4-9f782a1b57b5-kube-api-access-rq6xx\") pod \"interconnect-operator-78b9bd8798-fs494\" (UID: \"252bafbe-0c68-4b4b-85f4-9f782a1b57b5\") " pod="service-telemetry/interconnect-operator-78b9bd8798-fs494" Jan 21 18:27:23 crc kubenswrapper[5099]: I0121 18:27:23.661545 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rq6xx\" (UniqueName: \"kubernetes.io/projected/252bafbe-0c68-4b4b-85f4-9f782a1b57b5-kube-api-access-rq6xx\") pod \"interconnect-operator-78b9bd8798-fs494\" (UID: \"252bafbe-0c68-4b4b-85f4-9f782a1b57b5\") " pod="service-telemetry/interconnect-operator-78b9bd8798-fs494" Jan 21 18:27:23 crc kubenswrapper[5099]: I0121 18:27:23.815212 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-fs494" Jan 21 18:27:24 crc kubenswrapper[5099]: E0121 18:27:24.408002 5099 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 64323e1dd8904b7df0f29cd25f50851387535180c8fe7f725369ab202e6a2c31 is running failed: container process not found" containerID="64323e1dd8904b7df0f29cd25f50851387535180c8fe7f725369ab202e6a2c31" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 18:27:24 crc kubenswrapper[5099]: E0121 18:27:24.410032 5099 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 64323e1dd8904b7df0f29cd25f50851387535180c8fe7f725369ab202e6a2c31 is running failed: container process not found" containerID="64323e1dd8904b7df0f29cd25f50851387535180c8fe7f725369ab202e6a2c31" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 18:27:24 crc kubenswrapper[5099]: E0121 18:27:24.412328 5099 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 64323e1dd8904b7df0f29cd25f50851387535180c8fe7f725369ab202e6a2c31 is running failed: container process not found" containerID="64323e1dd8904b7df0f29cd25f50851387535180c8fe7f725369ab202e6a2c31" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 18:27:24 crc kubenswrapper[5099]: E0121 18:27:24.412691 5099 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 64323e1dd8904b7df0f29cd25f50851387535180c8fe7f725369ab202e6a2c31 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-5c9sl" 
podUID="eb80b964-3167-4125-8e33-214730cc9bdf" containerName="registry-server" probeResult="unknown" Jan 21 18:27:31 crc kubenswrapper[5099]: I0121 18:27:31.226040 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5c9sl" Jan 21 18:27:31 crc kubenswrapper[5099]: I0121 18:27:31.285675 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb80b964-3167-4125-8e33-214730cc9bdf-utilities\") pod \"eb80b964-3167-4125-8e33-214730cc9bdf\" (UID: \"eb80b964-3167-4125-8e33-214730cc9bdf\") " Jan 21 18:27:31 crc kubenswrapper[5099]: I0121 18:27:31.285803 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwf5z\" (UniqueName: \"kubernetes.io/projected/eb80b964-3167-4125-8e33-214730cc9bdf-kube-api-access-wwf5z\") pod \"eb80b964-3167-4125-8e33-214730cc9bdf\" (UID: \"eb80b964-3167-4125-8e33-214730cc9bdf\") " Jan 21 18:27:31 crc kubenswrapper[5099]: I0121 18:27:31.285835 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb80b964-3167-4125-8e33-214730cc9bdf-catalog-content\") pod \"eb80b964-3167-4125-8e33-214730cc9bdf\" (UID: \"eb80b964-3167-4125-8e33-214730cc9bdf\") " Jan 21 18:27:31 crc kubenswrapper[5099]: I0121 18:27:31.287969 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb80b964-3167-4125-8e33-214730cc9bdf-utilities" (OuterVolumeSpecName: "utilities") pod "eb80b964-3167-4125-8e33-214730cc9bdf" (UID: "eb80b964-3167-4125-8e33-214730cc9bdf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:27:31 crc kubenswrapper[5099]: I0121 18:27:31.299310 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb80b964-3167-4125-8e33-214730cc9bdf-kube-api-access-wwf5z" (OuterVolumeSpecName: "kube-api-access-wwf5z") pod "eb80b964-3167-4125-8e33-214730cc9bdf" (UID: "eb80b964-3167-4125-8e33-214730cc9bdf"). InnerVolumeSpecName "kube-api-access-wwf5z". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:27:31 crc kubenswrapper[5099]: I0121 18:27:31.389968 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb80b964-3167-4125-8e33-214730cc9bdf-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 18:27:31 crc kubenswrapper[5099]: I0121 18:27:31.390022 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wwf5z\" (UniqueName: \"kubernetes.io/projected/eb80b964-3167-4125-8e33-214730cc9bdf-kube-api-access-wwf5z\") on node \"crc\" DevicePath \"\"" Jan 21 18:27:31 crc kubenswrapper[5099]: I0121 18:27:31.396777 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb80b964-3167-4125-8e33-214730cc9bdf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eb80b964-3167-4125-8e33-214730cc9bdf" (UID: "eb80b964-3167-4125-8e33-214730cc9bdf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:27:31 crc kubenswrapper[5099]: I0121 18:27:31.463890 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5c9sl" Jan 21 18:27:31 crc kubenswrapper[5099]: I0121 18:27:31.471023 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5c9sl" event={"ID":"eb80b964-3167-4125-8e33-214730cc9bdf","Type":"ContainerDied","Data":"3b4605971d126f394c9cbea8d797758dd887fef954910605139b9d8a01d9be25"} Jan 21 18:27:31 crc kubenswrapper[5099]: I0121 18:27:31.471111 5099 scope.go:117] "RemoveContainer" containerID="64323e1dd8904b7df0f29cd25f50851387535180c8fe7f725369ab202e6a2c31" Jan 21 18:27:31 crc kubenswrapper[5099]: I0121 18:27:31.493933 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb80b964-3167-4125-8e33-214730cc9bdf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 18:27:31 crc kubenswrapper[5099]: I0121 18:27:31.506711 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5c9sl"] Jan 21 18:27:31 crc kubenswrapper[5099]: I0121 18:27:31.524102 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5c9sl"] Jan 21 18:27:31 crc kubenswrapper[5099]: I0121 18:27:31.927772 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb80b964-3167-4125-8e33-214730cc9bdf" path="/var/lib/kubelet/pods/eb80b964-3167-4125-8e33-214730cc9bdf/volumes" Jan 21 18:27:38 crc kubenswrapper[5099]: I0121 18:27:38.800436 5099 scope.go:117] "RemoveContainer" containerID="a10a7ccfa31662d47a8afd228f3278527632b506c2d7be09f3aceb6f0b344a2e" Jan 21 18:27:38 crc kubenswrapper[5099]: I0121 18:27:38.850539 5099 scope.go:117] "RemoveContainer" containerID="47c99bfc01fbc29372214093249bd1b2b0b6a36d6093663637ce7d365d7f5a26" Jan 21 18:27:39 crc kubenswrapper[5099]: I0121 18:27:39.366601 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-fs494"] Jan 21 18:27:39 crc kubenswrapper[5099]: W0121 18:27:39.381395 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod252bafbe_0c68_4b4b_85f4_9f782a1b57b5.slice/crio-319c241e025f464e1f0a0732c706dcd80e90f0bfddb1004047998ab03c1cbe68 WatchSource:0}: Error finding container 319c241e025f464e1f0a0732c706dcd80e90f0bfddb1004047998ab03c1cbe68: Status 404 returned error can't find the container with id 319c241e025f464e1f0a0732c706dcd80e90f0bfddb1004047998ab03c1cbe68 Jan 21 18:27:39 crc kubenswrapper[5099]: I0121 18:27:39.405581 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-77fcd4bd5f-lbdf6"] Jan 21 18:27:39 crc kubenswrapper[5099]: W0121 18:27:39.411749 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ac61964_c47e_486d_b2d3_13c9d16ae66c.slice/crio-a002f6407208df33d7df61be13e8830229651dcf9d072975a8ba5b45abad4100 WatchSource:0}: Error finding container a002f6407208df33d7df61be13e8830229651dcf9d072975a8ba5b45abad4100: Status 404 returned error can't find the container with id a002f6407208df33d7df61be13e8830229651dcf9d072975a8ba5b45abad4100 Jan 21 18:27:39 crc kubenswrapper[5099]: I0121 18:27:39.546608 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-mjp6r" 
event={"ID":"02638344-e66d-4d9e-bea9-cdf3c1040c33","Type":"ContainerStarted","Data":"835cefa6e5f82a1facd68138c9603599f93857cebadda5021f864e44b3817ef3"} Jan 21 18:27:39 crc kubenswrapper[5099]: I0121 18:27:39.547155 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/observability-operator-85c68dddb-mjp6r" Jan 21 18:27:39 crc kubenswrapper[5099]: I0121 18:27:39.549015 5099 patch_prober.go:28] interesting pod/observability-operator-85c68dddb-mjp6r container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.49:8081/healthz\": dial tcp 10.217.0.49:8081: connect: connection refused" start-of-body= Jan 21 18:27:39 crc kubenswrapper[5099]: I0121 18:27:39.549210 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-85c68dddb-mjp6r" podUID="02638344-e66d-4d9e-bea9-cdf3c1040c33" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.49:8081/healthz\": dial tcp 10.217.0.49:8081: connect: connection refused" Jan 21 18:27:39 crc kubenswrapper[5099]: I0121 18:27:39.553393 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f58tc" event={"ID":"4c1f0429-8f30-4646-aa1b-9913eb49ebfe","Type":"ContainerStarted","Data":"974f4587151a697d5b26618ab49688b671725068142ab7184093b4e9050a0499"} Jan 21 18:27:39 crc kubenswrapper[5099]: I0121 18:27:39.562811 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-7w26r" event={"ID":"becb8e6d-88cd-4469-a912-f5e13a03e815","Type":"ContainerStarted","Data":"2e3e031818172efd11eae668238d4b0258bcd167e5ce7b88a402b3c4183a0a94"} Jan 21 18:27:39 crc kubenswrapper[5099]: I0121 18:27:39.588994 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-85c68dddb-mjp6r" podStartSLOduration=2.901483207 podStartE2EDuration="25.588970199s" podCreationTimestamp="2026-01-21 18:27:14 +0000 UTC" firstStartedPulling="2026-01-21 18:27:16.148110533 +0000 UTC m=+793.562072994" lastFinishedPulling="2026-01-21 18:27:38.835597525 +0000 UTC m=+816.249559986" observedRunningTime="2026-01-21 18:27:39.576291886 +0000 UTC m=+816.990254347" watchObservedRunningTime="2026-01-21 18:27:39.588970199 +0000 UTC m=+817.002932660" Jan 21 18:27:39 crc kubenswrapper[5099]: I0121 18:27:39.606215 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bcfc495c6-vprfd" event={"ID":"f5eafa3f-5eb2-445a-a0db-d33e4783861e","Type":"ContainerStarted","Data":"e285c5bbc5f5dfb958fd7d17a40f3da6c4272fd0de4c1fddab5d648a8f614a52"} Jan 21 18:27:39 crc kubenswrapper[5099]: I0121 18:27:39.608226 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-77fcd4bd5f-lbdf6" event={"ID":"4ac61964-c47e-486d-b2d3-13c9d16ae66c","Type":"ContainerStarted","Data":"a002f6407208df33d7df61be13e8830229651dcf9d072975a8ba5b45abad4100"} Jan 21 18:27:39 crc kubenswrapper[5099]: I0121 18:27:39.609100 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-fs494" event={"ID":"252bafbe-0c68-4b4b-85f4-9f782a1b57b5","Type":"ContainerStarted","Data":"319c241e025f464e1f0a0732c706dcd80e90f0bfddb1004047998ab03c1cbe68"} Jan 21 18:27:39 crc kubenswrapper[5099]: I0121 18:27:39.615116 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bcfc495c6-8wjp5" event={"ID":"640f434d-3e8f-4429-a9b7-89a58100e49c","Type":"ContainerStarted","Data":"8e13d33ceaa5fb1712d56ffa09659624537f185022ee87d1abe0de36defbba2e"} Jan 21 18:27:39 crc kubenswrapper[5099]: I0121 18:27:39.622949 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-9fvzk" event={"ID":"aceb441c-bf15-4d82-908b-d5300c9a526e","Type":"ContainerStarted","Data":"d96a09b0b5098811a7242023e2431d5f6e49fbc4614a900348f4ab1bcee6c880"} Jan 21 18:27:39 crc kubenswrapper[5099]: I0121 18:27:39.623714 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/perses-operator-669c9f96b5-9fvzk" Jan 21 18:27:39 crc kubenswrapper[5099]: I0121 18:27:39.624030 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-7w26r" podStartSLOduration=3.507118328 podStartE2EDuration="26.624005094s" podCreationTimestamp="2026-01-21 18:27:13 +0000 UTC" firstStartedPulling="2026-01-21 18:27:15.619264126 +0000 UTC m=+793.033226587" lastFinishedPulling="2026-01-21 18:27:38.736150892 +0000 UTC m=+816.150113353" observedRunningTime="2026-01-21 18:27:39.622521252 +0000 UTC m=+817.036483713" watchObservedRunningTime="2026-01-21 18:27:39.624005094 +0000 UTC m=+817.037967555" Jan 21 18:27:39 crc kubenswrapper[5099]: I0121 18:27:39.628766 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ztdt" event={"ID":"730e984f-9245-4a98-aefb-dda6686307f1","Type":"ContainerStarted","Data":"6a2fc6d96dbccbe7ee3df7bd598aa11388b73a0941a4987d0df5c3218c24b2af"} Jan 21 18:27:39 crc kubenswrapper[5099]: I0121 18:27:39.649990 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bcfc495c6-vprfd" podStartSLOduration=4.037443586 podStartE2EDuration="26.649968633s" podCreationTimestamp="2026-01-21 18:27:13 +0000 UTC" firstStartedPulling="2026-01-21 18:27:16.189121686 +0000 UTC m=+793.603084147" lastFinishedPulling="2026-01-21 18:27:38.801646733 +0000 UTC m=+816.215609194" observedRunningTime="2026-01-21 18:27:39.647145852 +0000 UTC m=+817.061108323" watchObservedRunningTime="2026-01-21 18:27:39.649968633 +0000 UTC m=+817.063931094" Jan 21 18:27:39 crc kubenswrapper[5099]: I0121 18:27:39.686613 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bcfc495c6-8wjp5" podStartSLOduration=2.860081335 podStartE2EDuration="25.686598673s" podCreationTimestamp="2026-01-21 18:27:14 +0000 UTC" firstStartedPulling="2026-01-21 18:27:16.300428265 +0000 UTC m=+793.714390726" lastFinishedPulling="2026-01-21 18:27:39.126945603 +0000 UTC m=+816.540908064" observedRunningTime="2026-01-21 18:27:39.679171762 +0000 UTC m=+817.093134223" watchObservedRunningTime="2026-01-21 18:27:39.686598673 +0000 UTC m=+817.100561124" Jan 21 18:27:40 crc kubenswrapper[5099]: I0121 18:27:40.639135 5099 generic.go:358] "Generic (PLEG): container finished" podID="730e984f-9245-4a98-aefb-dda6686307f1" containerID="6a2fc6d96dbccbe7ee3df7bd598aa11388b73a0941a4987d0df5c3218c24b2af" exitCode=0 Jan 21 18:27:40 crc kubenswrapper[5099]: I0121 18:27:40.639687 5099 generic.go:358] "Generic (PLEG): container finished" podID="730e984f-9245-4a98-aefb-dda6686307f1" 
containerID="cc49db713976be22d8c0d8eeb171abc7499d937e8677967e3a1c251c61b1cfd3" exitCode=0 Jan 21 18:27:40 crc kubenswrapper[5099]: I0121 18:27:40.640703 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ztdt" event={"ID":"730e984f-9245-4a98-aefb-dda6686307f1","Type":"ContainerDied","Data":"6a2fc6d96dbccbe7ee3df7bd598aa11388b73a0941a4987d0df5c3218c24b2af"} Jan 21 18:27:40 crc kubenswrapper[5099]: I0121 18:27:40.640765 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ztdt" event={"ID":"730e984f-9245-4a98-aefb-dda6686307f1","Type":"ContainerDied","Data":"cc49db713976be22d8c0d8eeb171abc7499d937e8677967e3a1c251c61b1cfd3"} Jan 21 18:27:40 crc kubenswrapper[5099]: I0121 18:27:40.653128 5099 generic.go:358] "Generic (PLEG): container finished" podID="4c1f0429-8f30-4646-aa1b-9913eb49ebfe" containerID="974f4587151a697d5b26618ab49688b671725068142ab7184093b4e9050a0499" exitCode=0 Jan 21 18:27:40 crc kubenswrapper[5099]: I0121 18:27:40.653244 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f58tc" event={"ID":"4c1f0429-8f30-4646-aa1b-9913eb49ebfe","Type":"ContainerDied","Data":"974f4587151a697d5b26618ab49688b671725068142ab7184093b4e9050a0499"} Jan 21 18:27:40 crc kubenswrapper[5099]: I0121 18:27:40.656606 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-85c68dddb-mjp6r" Jan 21 18:27:40 crc kubenswrapper[5099]: I0121 18:27:40.666234 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-669c9f96b5-9fvzk" podStartSLOduration=4.424475295 podStartE2EDuration="26.666216322s" podCreationTimestamp="2026-01-21 18:27:14 +0000 UTC" firstStartedPulling="2026-01-21 18:27:16.596092736 +0000 UTC m=+794.010055197" lastFinishedPulling="2026-01-21 18:27:38.837833763 +0000 UTC m=+816.251796224" observedRunningTime="2026-01-21 18:27:39.762623711 +0000 UTC m=+817.176586192" watchObservedRunningTime="2026-01-21 18:27:40.666216322 +0000 UTC m=+818.080178783" Jan 21 18:27:42 crc kubenswrapper[5099]: I0121 18:27:42.017805 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ztdt" Jan 21 18:27:42 crc kubenswrapper[5099]: I0121 18:27:42.034611 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cspf7\" (UniqueName: \"kubernetes.io/projected/730e984f-9245-4a98-aefb-dda6686307f1-kube-api-access-cspf7\") pod \"730e984f-9245-4a98-aefb-dda6686307f1\" (UID: \"730e984f-9245-4a98-aefb-dda6686307f1\") " Jan 21 18:27:42 crc kubenswrapper[5099]: I0121 18:27:42.034798 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/730e984f-9245-4a98-aefb-dda6686307f1-util\") pod \"730e984f-9245-4a98-aefb-dda6686307f1\" (UID: \"730e984f-9245-4a98-aefb-dda6686307f1\") " Jan 21 18:27:42 crc kubenswrapper[5099]: I0121 18:27:42.034852 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/730e984f-9245-4a98-aefb-dda6686307f1-bundle\") pod \"730e984f-9245-4a98-aefb-dda6686307f1\" (UID: \"730e984f-9245-4a98-aefb-dda6686307f1\") " Jan 21 18:27:42 crc kubenswrapper[5099]: I0121 18:27:42.036153 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/730e984f-9245-4a98-aefb-dda6686307f1-bundle" (OuterVolumeSpecName: "bundle") pod "730e984f-9245-4a98-aefb-dda6686307f1" (UID: "730e984f-9245-4a98-aefb-dda6686307f1"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:27:42 crc kubenswrapper[5099]: I0121 18:27:42.044811 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/730e984f-9245-4a98-aefb-dda6686307f1-kube-api-access-cspf7" (OuterVolumeSpecName: "kube-api-access-cspf7") pod "730e984f-9245-4a98-aefb-dda6686307f1" (UID: "730e984f-9245-4a98-aefb-dda6686307f1"). InnerVolumeSpecName "kube-api-access-cspf7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:27:42 crc kubenswrapper[5099]: I0121 18:27:42.054152 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/730e984f-9245-4a98-aefb-dda6686307f1-util" (OuterVolumeSpecName: "util") pod "730e984f-9245-4a98-aefb-dda6686307f1" (UID: "730e984f-9245-4a98-aefb-dda6686307f1"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:27:42 crc kubenswrapper[5099]: I0121 18:27:42.136562 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cspf7\" (UniqueName: \"kubernetes.io/projected/730e984f-9245-4a98-aefb-dda6686307f1-kube-api-access-cspf7\") on node \"crc\" DevicePath \"\"" Jan 21 18:27:42 crc kubenswrapper[5099]: I0121 18:27:42.136612 5099 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/730e984f-9245-4a98-aefb-dda6686307f1-util\") on node \"crc\" DevicePath \"\"" Jan 21 18:27:42 crc kubenswrapper[5099]: I0121 18:27:42.136623 5099 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/730e984f-9245-4a98-aefb-dda6686307f1-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 18:27:42 crc kubenswrapper[5099]: I0121 18:27:42.692861 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ztdt" event={"ID":"730e984f-9245-4a98-aefb-dda6686307f1","Type":"ContainerDied","Data":"ab5d5e665a0a27e176ce664176397d551799b73f915d5646e5aa4cff6218f999"} Jan 21 18:27:42 crc kubenswrapper[5099]: I0121 18:27:42.692954 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab5d5e665a0a27e176ce664176397d551799b73f915d5646e5aa4cff6218f999" Jan 21 18:27:42 crc kubenswrapper[5099]: I0121 18:27:42.693397 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ztdt" Jan 21 18:27:42 crc kubenswrapper[5099]: I0121 18:27:42.701014 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f58tc" event={"ID":"4c1f0429-8f30-4646-aa1b-9913eb49ebfe","Type":"ContainerStarted","Data":"1c00e101619cb71fb86c534cc2b6150961cf99d0c4ad9d4317f21c10d755e903"} Jan 21 18:27:42 crc kubenswrapper[5099]: I0121 18:27:42.735180 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-f58tc" podStartSLOduration=7.7813971429999995 podStartE2EDuration="36.735139724s" podCreationTimestamp="2026-01-21 18:27:06 +0000 UTC" firstStartedPulling="2026-01-21 18:27:09.784447115 +0000 UTC m=+787.198409576" lastFinishedPulling="2026-01-21 18:27:38.738189696 +0000 UTC m=+816.152152157" observedRunningTime="2026-01-21 18:27:42.72755925 +0000 UTC m=+820.141521721" watchObservedRunningTime="2026-01-21 18:27:42.735139724 +0000 UTC m=+820.149102185" Jan 21 18:27:46 crc kubenswrapper[5099]: I0121 18:27:46.803310 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-f58tc" Jan 21 18:27:46 crc kubenswrapper[5099]: I0121 18:27:46.803887 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-f58tc" Jan 21 18:27:46 crc kubenswrapper[5099]: I0121 18:27:46.857561 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-f58tc" Jan 21 18:27:47 crc kubenswrapper[5099]: I0121 18:27:47.824593 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-f58tc" Jan 21 18:27:49 crc kubenswrapper[5099]: I0121 18:27:49.322161 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/certified-operators-f58tc"] Jan 21 18:27:49 crc kubenswrapper[5099]: I0121 18:27:49.890060 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mgt6l"] Jan 21 18:27:49 crc kubenswrapper[5099]: I0121 18:27:49.890896 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mgt6l" podUID="e742bf4c-6a87-4ee9-9a51-1313603c3b18" containerName="registry-server" containerID="cri-o://79b7739d560666ebbaaa77a4f5f84c3df347aa6f4223f5c4d4d218d6379ab9d0" gracePeriod=2 Jan 21 18:27:50 crc kubenswrapper[5099]: I0121 18:27:50.782840 5099 generic.go:358] "Generic (PLEG): container finished" podID="e742bf4c-6a87-4ee9-9a51-1313603c3b18" containerID="79b7739d560666ebbaaa77a4f5f84c3df347aa6f4223f5c4d4d218d6379ab9d0" exitCode=0 Jan 21 18:27:50 crc kubenswrapper[5099]: I0121 18:27:50.782934 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mgt6l" event={"ID":"e742bf4c-6a87-4ee9-9a51-1313603c3b18","Type":"ContainerDied","Data":"79b7739d560666ebbaaa77a4f5f84c3df347aa6f4223f5c4d4d218d6379ab9d0"} Jan 21 18:27:51 crc kubenswrapper[5099]: E0121 18:27:51.039067 5099 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 79b7739d560666ebbaaa77a4f5f84c3df347aa6f4223f5c4d4d218d6379ab9d0 is running failed: container process not found" containerID="79b7739d560666ebbaaa77a4f5f84c3df347aa6f4223f5c4d4d218d6379ab9d0" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 18:27:51 crc kubenswrapper[5099]: E0121 18:27:51.039907 5099 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 79b7739d560666ebbaaa77a4f5f84c3df347aa6f4223f5c4d4d218d6379ab9d0 is running failed: container process not found" containerID="79b7739d560666ebbaaa77a4f5f84c3df347aa6f4223f5c4d4d218d6379ab9d0" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 18:27:51 crc kubenswrapper[5099]: E0121 18:27:51.043229 5099 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 79b7739d560666ebbaaa77a4f5f84c3df347aa6f4223f5c4d4d218d6379ab9d0 is running failed: container process not found" containerID="79b7739d560666ebbaaa77a4f5f84c3df347aa6f4223f5c4d4d218d6379ab9d0" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 18:27:51 crc kubenswrapper[5099]: E0121 18:27:51.043335 5099 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 79b7739d560666ebbaaa77a4f5f84c3df347aa6f4223f5c4d4d218d6379ab9d0 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-mgt6l" podUID="e742bf4c-6a87-4ee9-9a51-1313603c3b18" containerName="registry-server" probeResult="unknown" Jan 21 18:27:51 crc kubenswrapper[5099]: I0121 18:27:51.126011 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-99vs4"] Jan 21 18:27:51 crc kubenswrapper[5099]: I0121 18:27:51.126806 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="730e984f-9245-4a98-aefb-dda6686307f1" containerName="util" Jan 21 18:27:51 crc kubenswrapper[5099]: I0121 18:27:51.126831 5099 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="730e984f-9245-4a98-aefb-dda6686307f1" containerName="util" Jan 21 18:27:51 crc kubenswrapper[5099]: I0121 18:27:51.126844 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="eb80b964-3167-4125-8e33-214730cc9bdf" containerName="extract-content" Jan 21 18:27:51 crc kubenswrapper[5099]: I0121 18:27:51.126852 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb80b964-3167-4125-8e33-214730cc9bdf" containerName="extract-content" Jan 21 18:27:51 crc kubenswrapper[5099]: I0121 18:27:51.126867 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="eb80b964-3167-4125-8e33-214730cc9bdf" containerName="extract-utilities" Jan 21 18:27:51 crc kubenswrapper[5099]: I0121 18:27:51.126873 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb80b964-3167-4125-8e33-214730cc9bdf" containerName="extract-utilities" Jan 21 18:27:51 crc kubenswrapper[5099]: I0121 18:27:51.126884 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="730e984f-9245-4a98-aefb-dda6686307f1" containerName="extract" Jan 21 18:27:51 crc kubenswrapper[5099]: I0121 18:27:51.126889 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="730e984f-9245-4a98-aefb-dda6686307f1" containerName="extract" Jan 21 18:27:51 crc kubenswrapper[5099]: I0121 18:27:51.126907 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="eb80b964-3167-4125-8e33-214730cc9bdf" containerName="registry-server" Jan 21 18:27:51 crc kubenswrapper[5099]: I0121 18:27:51.126913 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb80b964-3167-4125-8e33-214730cc9bdf" containerName="registry-server" Jan 21 18:27:51 crc kubenswrapper[5099]: I0121 18:27:51.126925 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="730e984f-9245-4a98-aefb-dda6686307f1" containerName="pull" Jan 21 18:27:51 crc kubenswrapper[5099]: I0121 18:27:51.126931 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="730e984f-9245-4a98-aefb-dda6686307f1" containerName="pull" Jan 21 18:27:51 crc kubenswrapper[5099]: I0121 18:27:51.127052 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="eb80b964-3167-4125-8e33-214730cc9bdf" containerName="registry-server" Jan 21 18:27:51 crc kubenswrapper[5099]: I0121 18:27:51.127069 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="730e984f-9245-4a98-aefb-dda6686307f1" containerName="extract" Jan 21 18:27:52 crc kubenswrapper[5099]: I0121 18:27:52.064855 5099 patch_prober.go:28] interesting pod/machine-config-daemon-hsl47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 18:27:52 crc kubenswrapper[5099]: I0121 18:27:52.064931 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 18:27:52 crc kubenswrapper[5099]: I0121 18:27:52.426752 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-99vs4" Jan 21 18:27:52 crc kubenswrapper[5099]: I0121 18:27:52.430062 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\"" Jan 21 18:27:52 crc kubenswrapper[5099]: I0121 18:27:52.431574 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"kube-root-ca.crt\"" Jan 21 18:27:52 crc kubenswrapper[5099]: I0121 18:27:52.437133 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-99vs4"] Jan 21 18:27:52 crc kubenswrapper[5099]: I0121 18:27:52.437286 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-669c9f96b5-9fvzk" Jan 21 18:27:52 crc kubenswrapper[5099]: I0121 18:27:52.437311 5099 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" Jan 21 18:27:52 crc kubenswrapper[5099]: I0121 18:27:52.438641 5099 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"51ffcc3cf1aa6ab3bfdb8cd2b8bb98ce9b9992d447364b1a4c0eb51c24a6f574"} pod="openshift-machine-config-operator/machine-config-daemon-hsl47" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 18:27:52 crc kubenswrapper[5099]: I0121 18:27:52.438752 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" containerID="cri-o://51ffcc3cf1aa6ab3bfdb8cd2b8bb98ce9b9992d447364b1a4c0eb51c24a6f574" gracePeriod=600 Jan 21 18:27:52 crc kubenswrapper[5099]: I0121 18:27:52.446173 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-28j84\"" Jan 21 18:27:52 crc kubenswrapper[5099]: I0121 18:27:52.480675 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqhxh\" (UniqueName: \"kubernetes.io/projected/d4e91c57-3acd-4a30-acfb-c9ea3b6b7248-kube-api-access-lqhxh\") pod \"cert-manager-operator-controller-manager-64c74584c4-99vs4\" (UID: \"d4e91c57-3acd-4a30-acfb-c9ea3b6b7248\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-99vs4" Jan 21 18:27:52 crc kubenswrapper[5099]: I0121 18:27:52.480834 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d4e91c57-3acd-4a30-acfb-c9ea3b6b7248-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-99vs4\" (UID: \"d4e91c57-3acd-4a30-acfb-c9ea3b6b7248\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-99vs4" Jan 21 18:27:52 crc kubenswrapper[5099]: I0121 18:27:52.583022 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d4e91c57-3acd-4a30-acfb-c9ea3b6b7248-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-99vs4\" (UID: \"d4e91c57-3acd-4a30-acfb-c9ea3b6b7248\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-99vs4" 
Jan 21 18:27:52 crc kubenswrapper[5099]: I0121 18:27:52.583656 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lqhxh\" (UniqueName: \"kubernetes.io/projected/d4e91c57-3acd-4a30-acfb-c9ea3b6b7248-kube-api-access-lqhxh\") pod \"cert-manager-operator-controller-manager-64c74584c4-99vs4\" (UID: \"d4e91c57-3acd-4a30-acfb-c9ea3b6b7248\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-99vs4" Jan 21 18:27:52 crc kubenswrapper[5099]: I0121 18:27:52.583766 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d4e91c57-3acd-4a30-acfb-c9ea3b6b7248-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-99vs4\" (UID: \"d4e91c57-3acd-4a30-acfb-c9ea3b6b7248\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-99vs4" Jan 21 18:27:52 crc kubenswrapper[5099]: I0121 18:27:52.633497 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqhxh\" (UniqueName: \"kubernetes.io/projected/d4e91c57-3acd-4a30-acfb-c9ea3b6b7248-kube-api-access-lqhxh\") pod \"cert-manager-operator-controller-manager-64c74584c4-99vs4\" (UID: \"d4e91c57-3acd-4a30-acfb-c9ea3b6b7248\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-99vs4" Jan 21 18:27:52 crc kubenswrapper[5099]: I0121 18:27:52.779521 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-99vs4" Jan 21 18:27:53 crc kubenswrapper[5099]: I0121 18:27:53.803814 5099 generic.go:358] "Generic (PLEG): container finished" podID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerID="51ffcc3cf1aa6ab3bfdb8cd2b8bb98ce9b9992d447364b1a4c0eb51c24a6f574" exitCode=0 Jan 21 18:27:53 crc kubenswrapper[5099]: I0121 18:27:53.803895 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" event={"ID":"b19b831f-eaf0-4c77-859b-84eb9a5f233c","Type":"ContainerDied","Data":"51ffcc3cf1aa6ab3bfdb8cd2b8bb98ce9b9992d447364b1a4c0eb51c24a6f574"} Jan 21 18:27:53 crc kubenswrapper[5099]: I0121 18:27:53.803984 5099 scope.go:117] "RemoveContainer" containerID="73cbfaf70bcdfb205e6384ff89aff3781e54852fa1a2f68835e37c14a636880c" Jan 21 18:27:59 crc kubenswrapper[5099]: I0121 18:27:59.316044 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mgt6l" Jan 21 18:27:59 crc kubenswrapper[5099]: I0121 18:27:59.359696 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e742bf4c-6a87-4ee9-9a51-1313603c3b18-utilities\") pod \"e742bf4c-6a87-4ee9-9a51-1313603c3b18\" (UID: \"e742bf4c-6a87-4ee9-9a51-1313603c3b18\") " Jan 21 18:27:59 crc kubenswrapper[5099]: I0121 18:27:59.360039 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e742bf4c-6a87-4ee9-9a51-1313603c3b18-catalog-content\") pod \"e742bf4c-6a87-4ee9-9a51-1313603c3b18\" (UID: \"e742bf4c-6a87-4ee9-9a51-1313603c3b18\") " Jan 21 18:27:59 crc kubenswrapper[5099]: I0121 18:27:59.360247 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxd2g\" (UniqueName: \"kubernetes.io/projected/e742bf4c-6a87-4ee9-9a51-1313603c3b18-kube-api-access-zxd2g\") pod \"e742bf4c-6a87-4ee9-9a51-1313603c3b18\" (UID: \"e742bf4c-6a87-4ee9-9a51-1313603c3b18\") " Jan 21 18:27:59 crc kubenswrapper[5099]: I0121 18:27:59.361814 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e742bf4c-6a87-4ee9-9a51-1313603c3b18-utilities" (OuterVolumeSpecName: "utilities") pod "e742bf4c-6a87-4ee9-9a51-1313603c3b18" (UID: "e742bf4c-6a87-4ee9-9a51-1313603c3b18"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:27:59 crc kubenswrapper[5099]: I0121 18:27:59.364463 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-99vs4"] Jan 21 18:27:59 crc kubenswrapper[5099]: I0121 18:27:59.377643 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e742bf4c-6a87-4ee9-9a51-1313603c3b18-kube-api-access-zxd2g" (OuterVolumeSpecName: "kube-api-access-zxd2g") pod "e742bf4c-6a87-4ee9-9a51-1313603c3b18" (UID: "e742bf4c-6a87-4ee9-9a51-1313603c3b18"). InnerVolumeSpecName "kube-api-access-zxd2g". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:27:59 crc kubenswrapper[5099]: I0121 18:27:59.445274 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e742bf4c-6a87-4ee9-9a51-1313603c3b18-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e742bf4c-6a87-4ee9-9a51-1313603c3b18" (UID: "e742bf4c-6a87-4ee9-9a51-1313603c3b18"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:27:59 crc kubenswrapper[5099]: I0121 18:27:59.462779 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zxd2g\" (UniqueName: \"kubernetes.io/projected/e742bf4c-6a87-4ee9-9a51-1313603c3b18-kube-api-access-zxd2g\") on node \"crc\" DevicePath \"\"" Jan 21 18:27:59 crc kubenswrapper[5099]: I0121 18:27:59.462841 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e742bf4c-6a87-4ee9-9a51-1313603c3b18-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 18:27:59 crc kubenswrapper[5099]: I0121 18:27:59.462876 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e742bf4c-6a87-4ee9-9a51-1313603c3b18-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 18:27:59 crc kubenswrapper[5099]: I0121 18:27:59.931981 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mgt6l" event={"ID":"e742bf4c-6a87-4ee9-9a51-1313603c3b18","Type":"ContainerDied","Data":"033d2a43a2f868fde23efb8d671704cdbc44759941c8cf78d90e9e70af070f69"} Jan 21 18:27:59 crc kubenswrapper[5099]: I0121 18:27:59.932615 5099 scope.go:117] "RemoveContainer" containerID="79b7739d560666ebbaaa77a4f5f84c3df347aa6f4223f5c4d4d218d6379ab9d0" Jan 21 18:27:59 crc kubenswrapper[5099]: I0121 18:27:59.932866 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mgt6l" Jan 21 18:27:59 crc kubenswrapper[5099]: I0121 18:27:59.970808 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" event={"ID":"b19b831f-eaf0-4c77-859b-84eb9a5f233c","Type":"ContainerStarted","Data":"47739ad43226ccaa23d66e4f75a21cb2d01702a76a41ce8c63bde01121040b33"} Jan 21 18:27:59 crc kubenswrapper[5099]: I0121 18:27:59.976959 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-99vs4" event={"ID":"d4e91c57-3acd-4a30-acfb-c9ea3b6b7248","Type":"ContainerStarted","Data":"2147a9a03e4ea84b28742129b21153d98f22bc67948ece5b317eacd127872098"} Jan 21 18:27:59 crc kubenswrapper[5099]: I0121 18:27:59.985392 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-77fcd4bd5f-lbdf6" event={"ID":"4ac61964-c47e-486d-b2d3-13c9d16ae66c","Type":"ContainerStarted","Data":"f3750773a245975ce7a746f576015a45301f1f6a246c3eb83d671f3b2d5f7849"} Jan 21 18:27:59 crc kubenswrapper[5099]: I0121 18:27:59.985501 5099 scope.go:117] "RemoveContainer" containerID="11b00e01f72e5410f717ed7544fee38cab08dd99a3e7953f9bce0152c673aaba" Jan 21 18:27:59 crc kubenswrapper[5099]: I0121 18:27:59.989792 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-fs494" event={"ID":"252bafbe-0c68-4b4b-85f4-9f782a1b57b5","Type":"ContainerStarted","Data":"aeab16309f86d7fac4bc1888c5ea7fc597a6bf248bd81f26c516b7060235951a"} Jan 21 18:28:00 crc kubenswrapper[5099]: I0121 18:28:00.010784 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mgt6l"] Jan 21 18:28:00 crc kubenswrapper[5099]: I0121 18:28:00.019108 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mgt6l"] Jan 21 18:28:00 crc kubenswrapper[5099]: I0121 18:28:00.024742 5099 scope.go:117] "RemoveContainer" 
containerID="5d05187f838e7781dd4ceb901382c757c88ee00147beacdb93dc12bbaaebac18" Jan 21 18:28:00 crc kubenswrapper[5099]: I0121 18:28:00.056944 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-77fcd4bd5f-lbdf6" podStartSLOduration=18.567025494 podStartE2EDuration="38.056915072s" podCreationTimestamp="2026-01-21 18:27:22 +0000 UTC" firstStartedPulling="2026-01-21 18:27:39.415770736 +0000 UTC m=+816.829733197" lastFinishedPulling="2026-01-21 18:27:58.905660314 +0000 UTC m=+836.319622775" observedRunningTime="2026-01-21 18:28:00.049271058 +0000 UTC m=+837.463233539" watchObservedRunningTime="2026-01-21 18:28:00.056915072 +0000 UTC m=+837.470877533" Jan 21 18:28:00 crc kubenswrapper[5099]: I0121 18:28:00.101139 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-78b9bd8798-fs494" podStartSLOduration=17.459326933 podStartE2EDuration="37.101121525s" podCreationTimestamp="2026-01-21 18:27:23 +0000 UTC" firstStartedPulling="2026-01-21 18:27:39.383689575 +0000 UTC m=+816.797652036" lastFinishedPulling="2026-01-21 18:27:59.025484167 +0000 UTC m=+836.439446628" observedRunningTime="2026-01-21 18:28:00.097316523 +0000 UTC m=+837.511278984" watchObservedRunningTime="2026-01-21 18:28:00.101121525 +0000 UTC m=+837.515083986" Jan 21 18:28:00 crc kubenswrapper[5099]: I0121 18:28:00.172601 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483668-8mxnt"] Jan 21 18:28:00 crc kubenswrapper[5099]: I0121 18:28:00.173847 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e742bf4c-6a87-4ee9-9a51-1313603c3b18" containerName="extract-utilities" Jan 21 18:28:00 crc kubenswrapper[5099]: I0121 18:28:00.173939 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="e742bf4c-6a87-4ee9-9a51-1313603c3b18" containerName="extract-utilities" Jan 21 18:28:00 crc kubenswrapper[5099]: I0121 18:28:00.174003 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e742bf4c-6a87-4ee9-9a51-1313603c3b18" containerName="extract-content" Jan 21 18:28:00 crc kubenswrapper[5099]: I0121 18:28:00.174063 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="e742bf4c-6a87-4ee9-9a51-1313603c3b18" containerName="extract-content" Jan 21 18:28:00 crc kubenswrapper[5099]: I0121 18:28:00.174147 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e742bf4c-6a87-4ee9-9a51-1313603c3b18" containerName="registry-server" Jan 21 18:28:00 crc kubenswrapper[5099]: I0121 18:28:00.174201 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="e742bf4c-6a87-4ee9-9a51-1313603c3b18" containerName="registry-server" Jan 21 18:28:00 crc kubenswrapper[5099]: I0121 18:28:00.174384 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="e742bf4c-6a87-4ee9-9a51-1313603c3b18" containerName="registry-server" Jan 21 18:28:00 crc kubenswrapper[5099]: I0121 18:28:00.186344 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483668-8mxnt"] Jan 21 18:28:00 crc kubenswrapper[5099]: I0121 18:28:00.186982 5099 util.go:30] "No sandbox for pod can be found. 
Jan 21 18:28:00 crc kubenswrapper[5099]: I0121 18:28:00.189975 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 18:28:00 crc kubenswrapper[5099]: I0121 18:28:00.190307 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 18:28:00 crc kubenswrapper[5099]: I0121 18:28:00.191283 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-z79sf\"" Jan 21 18:28:00 crc kubenswrapper[5099]: I0121 18:28:00.315619 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb9n7\" (UniqueName: \"kubernetes.io/projected/7270666a-ed0b-4c75-b2ef-38c616af082a-kube-api-access-fb9n7\") pod \"auto-csr-approver-29483668-8mxnt\" (UID: \"7270666a-ed0b-4c75-b2ef-38c616af082a\") " pod="openshift-infra/auto-csr-approver-29483668-8mxnt" Jan 21 18:28:00 crc kubenswrapper[5099]: I0121 18:28:00.417217 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fb9n7\" (UniqueName: \"kubernetes.io/projected/7270666a-ed0b-4c75-b2ef-38c616af082a-kube-api-access-fb9n7\") pod \"auto-csr-approver-29483668-8mxnt\" (UID: \"7270666a-ed0b-4c75-b2ef-38c616af082a\") " pod="openshift-infra/auto-csr-approver-29483668-8mxnt" Jan 21 18:28:00 crc kubenswrapper[5099]: I0121 18:28:00.444194 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fb9n7\" (UniqueName: \"kubernetes.io/projected/7270666a-ed0b-4c75-b2ef-38c616af082a-kube-api-access-fb9n7\") pod \"auto-csr-approver-29483668-8mxnt\" (UID: \"7270666a-ed0b-4c75-b2ef-38c616af082a\") " pod="openshift-infra/auto-csr-approver-29483668-8mxnt" Jan 21 18:28:00 crc kubenswrapper[5099]: I0121 18:28:00.506779 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483668-8mxnt" Jan 21 18:28:00 crc kubenswrapper[5099]: I0121 18:28:00.909118 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483668-8mxnt"] Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.000354 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483668-8mxnt" event={"ID":"7270666a-ed0b-4c75-b2ef-38c616af082a","Type":"ContainerStarted","Data":"2c1c1bd847335a0092fc7ecb6870ecb1211f7957715bfcff04a0216911cf932d"} Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.141402 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.211273 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.211634 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.215320 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-remote-ca\"" Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.217146 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-config\"" Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.222209 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-dockercfg-r2ggd\"" Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.222477 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-http-certs-internal\"" Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.222658 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-internal-users\"" Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.222826 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-unicast-hosts\"" Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.223313 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-scripts\"" Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.223648 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-xpack-file-realm\"" Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.223855 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-transport-certs\"" Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.333845 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/67bd99a7-8bd7-4673-a648-c41eee407194-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"67bd99a7-8bd7-4673-a648-c41eee407194\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.334304 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/67bd99a7-8bd7-4673-a648-c41eee407194-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"67bd99a7-8bd7-4673-a648-c41eee407194\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.334341 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/67bd99a7-8bd7-4673-a648-c41eee407194-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"67bd99a7-8bd7-4673-a648-c41eee407194\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.334380 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/67bd99a7-8bd7-4673-a648-c41eee407194-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: 
\"67bd99a7-8bd7-4673-a648-c41eee407194\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.334402 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/67bd99a7-8bd7-4673-a648-c41eee407194-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"67bd99a7-8bd7-4673-a648-c41eee407194\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.334503 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/67bd99a7-8bd7-4673-a648-c41eee407194-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"67bd99a7-8bd7-4673-a648-c41eee407194\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.334520 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/67bd99a7-8bd7-4673-a648-c41eee407194-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"67bd99a7-8bd7-4673-a648-c41eee407194\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.334558 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/67bd99a7-8bd7-4673-a648-c41eee407194-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"67bd99a7-8bd7-4673-a648-c41eee407194\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.334578 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/67bd99a7-8bd7-4673-a648-c41eee407194-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"67bd99a7-8bd7-4673-a648-c41eee407194\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.334598 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/67bd99a7-8bd7-4673-a648-c41eee407194-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"67bd99a7-8bd7-4673-a648-c41eee407194\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.334621 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/67bd99a7-8bd7-4673-a648-c41eee407194-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"67bd99a7-8bd7-4673-a648-c41eee407194\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.334640 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/67bd99a7-8bd7-4673-a648-c41eee407194-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: 
\"67bd99a7-8bd7-4673-a648-c41eee407194\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.334669 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/67bd99a7-8bd7-4673-a648-c41eee407194-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"67bd99a7-8bd7-4673-a648-c41eee407194\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.334707 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/67bd99a7-8bd7-4673-a648-c41eee407194-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"67bd99a7-8bd7-4673-a648-c41eee407194\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.334753 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/67bd99a7-8bd7-4673-a648-c41eee407194-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"67bd99a7-8bd7-4673-a648-c41eee407194\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.437810 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/67bd99a7-8bd7-4673-a648-c41eee407194-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"67bd99a7-8bd7-4673-a648-c41eee407194\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.437853 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/67bd99a7-8bd7-4673-a648-c41eee407194-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"67bd99a7-8bd7-4673-a648-c41eee407194\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.437883 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/67bd99a7-8bd7-4673-a648-c41eee407194-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"67bd99a7-8bd7-4673-a648-c41eee407194\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.437918 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/67bd99a7-8bd7-4673-a648-c41eee407194-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"67bd99a7-8bd7-4673-a648-c41eee407194\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.437937 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/67bd99a7-8bd7-4673-a648-c41eee407194-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"67bd99a7-8bd7-4673-a648-c41eee407194\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 18:28:01 
crc kubenswrapper[5099]: I0121 18:28:01.437960 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/67bd99a7-8bd7-4673-a648-c41eee407194-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"67bd99a7-8bd7-4673-a648-c41eee407194\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.437974 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/67bd99a7-8bd7-4673-a648-c41eee407194-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"67bd99a7-8bd7-4673-a648-c41eee407194\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.438003 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/67bd99a7-8bd7-4673-a648-c41eee407194-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"67bd99a7-8bd7-4673-a648-c41eee407194\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.438022 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/67bd99a7-8bd7-4673-a648-c41eee407194-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"67bd99a7-8bd7-4673-a648-c41eee407194\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.438040 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/67bd99a7-8bd7-4673-a648-c41eee407194-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"67bd99a7-8bd7-4673-a648-c41eee407194\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.438061 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/67bd99a7-8bd7-4673-a648-c41eee407194-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"67bd99a7-8bd7-4673-a648-c41eee407194\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.438079 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/67bd99a7-8bd7-4673-a648-c41eee407194-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"67bd99a7-8bd7-4673-a648-c41eee407194\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.438110 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/67bd99a7-8bd7-4673-a648-c41eee407194-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"67bd99a7-8bd7-4673-a648-c41eee407194\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.438136 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/67bd99a7-8bd7-4673-a648-c41eee407194-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"67bd99a7-8bd7-4673-a648-c41eee407194\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.438158 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/67bd99a7-8bd7-4673-a648-c41eee407194-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"67bd99a7-8bd7-4673-a648-c41eee407194\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.441599 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/67bd99a7-8bd7-4673-a648-c41eee407194-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"67bd99a7-8bd7-4673-a648-c41eee407194\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.441601 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/67bd99a7-8bd7-4673-a648-c41eee407194-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"67bd99a7-8bd7-4673-a648-c41eee407194\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.441881 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/67bd99a7-8bd7-4673-a648-c41eee407194-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"67bd99a7-8bd7-4673-a648-c41eee407194\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.442003 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/67bd99a7-8bd7-4673-a648-c41eee407194-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"67bd99a7-8bd7-4673-a648-c41eee407194\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.442457 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/67bd99a7-8bd7-4673-a648-c41eee407194-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"67bd99a7-8bd7-4673-a648-c41eee407194\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.442878 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/67bd99a7-8bd7-4673-a648-c41eee407194-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"67bd99a7-8bd7-4673-a648-c41eee407194\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.444056 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/67bd99a7-8bd7-4673-a648-c41eee407194-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: 
\"67bd99a7-8bd7-4673-a648-c41eee407194\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.444535 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/67bd99a7-8bd7-4673-a648-c41eee407194-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"67bd99a7-8bd7-4673-a648-c41eee407194\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.448108 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/67bd99a7-8bd7-4673-a648-c41eee407194-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"67bd99a7-8bd7-4673-a648-c41eee407194\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.448171 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/67bd99a7-8bd7-4673-a648-c41eee407194-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"67bd99a7-8bd7-4673-a648-c41eee407194\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.449361 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/67bd99a7-8bd7-4673-a648-c41eee407194-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"67bd99a7-8bd7-4673-a648-c41eee407194\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.449508 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/67bd99a7-8bd7-4673-a648-c41eee407194-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"67bd99a7-8bd7-4673-a648-c41eee407194\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.449623 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/67bd99a7-8bd7-4673-a648-c41eee407194-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"67bd99a7-8bd7-4673-a648-c41eee407194\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.450620 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/67bd99a7-8bd7-4673-a648-c41eee407194-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"67bd99a7-8bd7-4673-a648-c41eee407194\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.474017 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/67bd99a7-8bd7-4673-a648-c41eee407194-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"67bd99a7-8bd7-4673-a648-c41eee407194\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 18:28:01 crc kubenswrapper[5099]: I0121 18:28:01.544405 5099 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Jan 21 18:28:02 crc kubenswrapper[5099]: I0121 18:28:02.041993 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e742bf4c-6a87-4ee9-9a51-1313603c3b18" path="/var/lib/kubelet/pods/e742bf4c-6a87-4ee9-9a51-1313603c3b18/volumes" Jan 21 18:28:02 crc kubenswrapper[5099]: I0121 18:28:02.280793 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 21 18:28:03 crc kubenswrapper[5099]: I0121 18:28:03.048016 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"67bd99a7-8bd7-4673-a648-c41eee407194","Type":"ContainerStarted","Data":"9c306a893d0603dd00ee8a5e2690c0070b594ba7bf60a51bebd7e1eb49c977a6"} Jan 21 18:28:03 crc kubenswrapper[5099]: I0121 18:28:03.057092 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483668-8mxnt" event={"ID":"7270666a-ed0b-4c75-b2ef-38c616af082a","Type":"ContainerStarted","Data":"9db4f0aa917799e06e268f75868aab42ce96be234817c3baa1f0ccfacf6a0228"} Jan 21 18:28:03 crc kubenswrapper[5099]: I0121 18:28:03.078913 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29483668-8mxnt" podStartSLOduration=2.142003262 podStartE2EDuration="3.078766278s" podCreationTimestamp="2026-01-21 18:28:00 +0000 UTC" firstStartedPulling="2026-01-21 18:28:00.950117469 +0000 UTC m=+838.364079930" lastFinishedPulling="2026-01-21 18:28:01.886880465 +0000 UTC m=+839.300842946" observedRunningTime="2026-01-21 18:28:03.073053165 +0000 UTC m=+840.487015626" watchObservedRunningTime="2026-01-21 18:28:03.078766278 +0000 UTC m=+840.492728739" Jan 21 18:28:04 crc kubenswrapper[5099]: I0121 18:28:04.065237 5099 generic.go:358] "Generic (PLEG): container finished" podID="7270666a-ed0b-4c75-b2ef-38c616af082a" containerID="9db4f0aa917799e06e268f75868aab42ce96be234817c3baa1f0ccfacf6a0228" exitCode=0 Jan 21 18:28:04 crc kubenswrapper[5099]: I0121 18:28:04.065340 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483668-8mxnt" event={"ID":"7270666a-ed0b-4c75-b2ef-38c616af082a","Type":"ContainerDied","Data":"9db4f0aa917799e06e268f75868aab42ce96be234817c3baa1f0ccfacf6a0228"} Jan 21 18:28:12 crc kubenswrapper[5099]: I0121 18:28:12.797332 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483668-8mxnt" Jan 21 18:28:12 crc kubenswrapper[5099]: I0121 18:28:12.832843 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fb9n7\" (UniqueName: \"kubernetes.io/projected/7270666a-ed0b-4c75-b2ef-38c616af082a-kube-api-access-fb9n7\") pod \"7270666a-ed0b-4c75-b2ef-38c616af082a\" (UID: \"7270666a-ed0b-4c75-b2ef-38c616af082a\") " Jan 21 18:28:12 crc kubenswrapper[5099]: I0121 18:28:12.853635 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7270666a-ed0b-4c75-b2ef-38c616af082a-kube-api-access-fb9n7" (OuterVolumeSpecName: "kube-api-access-fb9n7") pod "7270666a-ed0b-4c75-b2ef-38c616af082a" (UID: "7270666a-ed0b-4c75-b2ef-38c616af082a"). InnerVolumeSpecName "kube-api-access-fb9n7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
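
The span above is a complete teardown for the finished auto-csr-approver Job pod: the container exits with exitCode=0, the reconciler starts UnmountVolume for kube-api-access-fb9n7, operation_generator reports TearDown succeeded, and the matching "Volume detached" confirmation follows immediately below. When chasing pods stuck in Terminating it helps to extract just these transitions from the journal. A rough Go filter, under the assumption that the kubelet keeps the message strings seen here (they come from reconciler_common.go and operation_generator.go and are log text, not a stable API):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    // Patterns copied from this log. The structured entries escape their
    // quotes (\"...\"); the TearDown line uses plain quotes.
    var phases = []struct {
        name string
        re   *regexp.Regexp
    }{
        {"unmount-started", regexp.MustCompile(`operationExecutor\.UnmountVolume started for volume \\"([^\\"]+)\\"`)},
        {"teardown-ok", regexp.MustCompile(`UnmountVolume\.TearDown succeeded for volume "([^"]+)"`)},
        {"detached", regexp.MustCompile(`Volume detached for volume \\"([^\\"]+)\\"`)},
    }

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
        for sc.Scan() {
            line := sc.Text()
            for _, p := range phases {
                if m := p.re.FindStringSubmatch(line); m != nil {
                    fmt.Printf("%-16s %s\n", p.name, m[1])
                }
            }
        }
    }

Fed the journal on stdin (for example via journalctl -o cat), a volume that reports "unmount-started" but never reaches "detached" stands out immediately.
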
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:28:12 crc kubenswrapper[5099]: I0121 18:28:12.936667 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fb9n7\" (UniqueName: \"kubernetes.io/projected/7270666a-ed0b-4c75-b2ef-38c616af082a-kube-api-access-fb9n7\") on node \"crc\" DevicePath \"\"" Jan 21 18:28:13 crc kubenswrapper[5099]: I0121 18:28:13.177990 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483668-8mxnt" event={"ID":"7270666a-ed0b-4c75-b2ef-38c616af082a","Type":"ContainerDied","Data":"2c1c1bd847335a0092fc7ecb6870ecb1211f7957715bfcff04a0216911cf932d"} Jan 21 18:28:13 crc kubenswrapper[5099]: I0121 18:28:13.178052 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483668-8mxnt" Jan 21 18:28:13 crc kubenswrapper[5099]: I0121 18:28:13.178072 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c1c1bd847335a0092fc7ecb6870ecb1211f7957715bfcff04a0216911cf932d" Jan 21 18:28:13 crc kubenswrapper[5099]: I0121 18:28:13.861430 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483662-6z2vh"] Jan 21 18:28:13 crc kubenswrapper[5099]: I0121 18:28:13.866995 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483662-6z2vh"] Jan 21 18:28:13 crc kubenswrapper[5099]: I0121 18:28:13.924836 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a4c39e6-db4a-40b4-b7b5-a50799c8ba95" path="/var/lib/kubelet/pods/6a4c39e6-db4a-40b4-b7b5-a50799c8ba95/volumes" Jan 21 18:28:26 crc kubenswrapper[5099]: I0121 18:28:26.284280 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-99vs4" event={"ID":"d4e91c57-3acd-4a30-acfb-c9ea3b6b7248","Type":"ContainerStarted","Data":"0e175884ed6976d805b87970d8cb60488f87e9d1a9dcd081767bc1dfe85b9c45"} Jan 21 18:28:26 crc kubenswrapper[5099]: I0121 18:28:26.302515 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-99vs4" podStartSLOduration=18.604932828 podStartE2EDuration="35.302493103s" podCreationTimestamp="2026-01-21 18:27:51 +0000 UTC" firstStartedPulling="2026-01-21 18:27:59.35653199 +0000 UTC m=+836.770494451" lastFinishedPulling="2026-01-21 18:28:16.054092265 +0000 UTC m=+853.468054726" observedRunningTime="2026-01-21 18:28:26.298482586 +0000 UTC m=+863.712445047" watchObservedRunningTime="2026-01-21 18:28:26.302493103 +0000 UTC m=+863.716455564" Jan 21 18:28:27 crc kubenswrapper[5099]: I0121 18:28:27.297447 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"67bd99a7-8bd7-4673-a648-c41eee407194","Type":"ContainerStarted","Data":"92bbb994965718347bf3f294092818cf203a51429bbe810ec60f7554a4960f70"} Jan 21 18:28:27 crc kubenswrapper[5099]: I0121 18:28:27.540768 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 21 18:28:27 crc kubenswrapper[5099]: I0121 18:28:27.595929 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 21 18:28:29 crc kubenswrapper[5099]: I0121 18:28:29.311586 5099 generic.go:358] "Generic (PLEG): container finished" podID="67bd99a7-8bd7-4673-a648-c41eee407194" 
containerID="92bbb994965718347bf3f294092818cf203a51429bbe810ec60f7554a4960f70" exitCode=0 Jan 21 18:28:29 crc kubenswrapper[5099]: I0121 18:28:29.311677 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"67bd99a7-8bd7-4673-a648-c41eee407194","Type":"ContainerDied","Data":"92bbb994965718347bf3f294092818cf203a51429bbe810ec60f7554a4960f70"} Jan 21 18:28:29 crc kubenswrapper[5099]: I0121 18:28:29.849590 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-szdsk"] Jan 21 18:28:29 crc kubenswrapper[5099]: I0121 18:28:29.851032 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7270666a-ed0b-4c75-b2ef-38c616af082a" containerName="oc" Jan 21 18:28:29 crc kubenswrapper[5099]: I0121 18:28:29.851061 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="7270666a-ed0b-4c75-b2ef-38c616af082a" containerName="oc" Jan 21 18:28:29 crc kubenswrapper[5099]: I0121 18:28:29.851197 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="7270666a-ed0b-4c75-b2ef-38c616af082a" containerName="oc" Jan 21 18:28:29 crc kubenswrapper[5099]: I0121 18:28:29.854989 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-szdsk" Jan 21 18:28:29 crc kubenswrapper[5099]: I0121 18:28:29.857873 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-f4ghb\"" Jan 21 18:28:29 crc kubenswrapper[5099]: I0121 18:28:29.858411 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\"" Jan 21 18:28:29 crc kubenswrapper[5099]: I0121 18:28:29.858497 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\"" Jan 21 18:28:29 crc kubenswrapper[5099]: I0121 18:28:29.860611 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-szdsk"] Jan 21 18:28:29 crc kubenswrapper[5099]: I0121 18:28:29.909173 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4bb52920-da03-43a3-bde0-0504738f45ab-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-szdsk\" (UID: \"4bb52920-da03-43a3-bde0-0504738f45ab\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-szdsk" Jan 21 18:28:29 crc kubenswrapper[5099]: I0121 18:28:29.909714 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7576\" (UniqueName: \"kubernetes.io/projected/4bb52920-da03-43a3-bde0-0504738f45ab-kube-api-access-m7576\") pod \"cert-manager-cainjector-7dbf76d5c8-szdsk\" (UID: \"4bb52920-da03-43a3-bde0-0504738f45ab\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-szdsk" Jan 21 18:28:30 crc kubenswrapper[5099]: I0121 18:28:30.012018 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4bb52920-da03-43a3-bde0-0504738f45ab-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-szdsk\" (UID: \"4bb52920-da03-43a3-bde0-0504738f45ab\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-szdsk" Jan 21 18:28:30 crc kubenswrapper[5099]: I0121 18:28:30.012550 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"kube-api-access-m7576\" (UniqueName: \"kubernetes.io/projected/4bb52920-da03-43a3-bde0-0504738f45ab-kube-api-access-m7576\") pod \"cert-manager-cainjector-7dbf76d5c8-szdsk\" (UID: \"4bb52920-da03-43a3-bde0-0504738f45ab\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-szdsk" Jan 21 18:28:30 crc kubenswrapper[5099]: I0121 18:28:30.035167 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4bb52920-da03-43a3-bde0-0504738f45ab-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-szdsk\" (UID: \"4bb52920-da03-43a3-bde0-0504738f45ab\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-szdsk" Jan 21 18:28:30 crc kubenswrapper[5099]: I0121 18:28:30.035509 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7576\" (UniqueName: \"kubernetes.io/projected/4bb52920-da03-43a3-bde0-0504738f45ab-kube-api-access-m7576\") pod \"cert-manager-cainjector-7dbf76d5c8-szdsk\" (UID: \"4bb52920-da03-43a3-bde0-0504738f45ab\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-szdsk" Jan 21 18:28:30 crc kubenswrapper[5099]: I0121 18:28:30.060322 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-pznvg"] Jan 21 18:28:30 crc kubenswrapper[5099]: I0121 18:28:30.067259 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-pznvg" Jan 21 18:28:30 crc kubenswrapper[5099]: I0121 18:28:30.070292 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-zk7fp\"" Jan 21 18:28:30 crc kubenswrapper[5099]: I0121 18:28:30.074628 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-pznvg"] Jan 21 18:28:30 crc kubenswrapper[5099]: I0121 18:28:30.114285 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/27547e6e-e7d9-4aed-9ce4-f2cf98352e1d-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-pznvg\" (UID: \"27547e6e-e7d9-4aed-9ce4-f2cf98352e1d\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-pznvg" Jan 21 18:28:30 crc kubenswrapper[5099]: I0121 18:28:30.114378 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntrk2\" (UniqueName: \"kubernetes.io/projected/27547e6e-e7d9-4aed-9ce4-f2cf98352e1d-kube-api-access-ntrk2\") pod \"cert-manager-webhook-7894b5b9b4-pznvg\" (UID: \"27547e6e-e7d9-4aed-9ce4-f2cf98352e1d\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-pznvg" Jan 21 18:28:30 crc kubenswrapper[5099]: I0121 18:28:30.170256 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-szdsk" Jan 21 18:28:30 crc kubenswrapper[5099]: I0121 18:28:30.216028 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ntrk2\" (UniqueName: \"kubernetes.io/projected/27547e6e-e7d9-4aed-9ce4-f2cf98352e1d-kube-api-access-ntrk2\") pod \"cert-manager-webhook-7894b5b9b4-pznvg\" (UID: \"27547e6e-e7d9-4aed-9ce4-f2cf98352e1d\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-pznvg" Jan 21 18:28:30 crc kubenswrapper[5099]: I0121 18:28:30.216447 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/27547e6e-e7d9-4aed-9ce4-f2cf98352e1d-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-pznvg\" (UID: \"27547e6e-e7d9-4aed-9ce4-f2cf98352e1d\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-pznvg" Jan 21 18:28:30 crc kubenswrapper[5099]: I0121 18:28:30.242684 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/27547e6e-e7d9-4aed-9ce4-f2cf98352e1d-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-pznvg\" (UID: \"27547e6e-e7d9-4aed-9ce4-f2cf98352e1d\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-pznvg" Jan 21 18:28:30 crc kubenswrapper[5099]: I0121 18:28:30.243517 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntrk2\" (UniqueName: \"kubernetes.io/projected/27547e6e-e7d9-4aed-9ce4-f2cf98352e1d-kube-api-access-ntrk2\") pod \"cert-manager-webhook-7894b5b9b4-pznvg\" (UID: \"27547e6e-e7d9-4aed-9ce4-f2cf98352e1d\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-pznvg" Jan 21 18:28:30 crc kubenswrapper[5099]: I0121 18:28:30.332139 5099 generic.go:358] "Generic (PLEG): container finished" podID="67bd99a7-8bd7-4673-a648-c41eee407194" containerID="e0bf2523e908dd636a79fe7fd8935c44ca0c77e4f8536f3fb4e8cd7a7c97018c" exitCode=0 Jan 21 18:28:30 crc kubenswrapper[5099]: I0121 18:28:30.332564 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"67bd99a7-8bd7-4673-a648-c41eee407194","Type":"ContainerDied","Data":"e0bf2523e908dd636a79fe7fd8935c44ca0c77e4f8536f3fb4e8cd7a7c97018c"} Jan 21 18:28:30 crc kubenswrapper[5099]: I0121 18:28:30.414209 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-pznvg" Jan 21 18:28:30 crc kubenswrapper[5099]: I0121 18:28:30.483702 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-szdsk"] Jan 21 18:28:30 crc kubenswrapper[5099]: I0121 18:28:30.704364 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-pznvg"] Jan 21 18:28:30 crc kubenswrapper[5099]: W0121 18:28:30.712523 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27547e6e_e7d9_4aed_9ce4_f2cf98352e1d.slice/crio-891b566c0e46918e637050f13c8e360176ca5c4907b430afa20cc07f9c4bd3ca WatchSource:0}: Error finding container 891b566c0e46918e637050f13c8e360176ca5c4907b430afa20cc07f9c4bd3ca: Status 404 returned error can't find the container with id 891b566c0e46918e637050f13c8e360176ca5c4907b430afa20cc07f9c4bd3ca Jan 21 18:28:31 crc kubenswrapper[5099]: I0121 18:28:31.342480 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"67bd99a7-8bd7-4673-a648-c41eee407194","Type":"ContainerStarted","Data":"d490e89abea9d66b6caf233118c287519d15e94e78b7ad86b341c2ffdd1f4fee"} Jan 21 18:28:31 crc kubenswrapper[5099]: I0121 18:28:31.342707 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/elasticsearch-es-default-0" Jan 21 18:28:31 crc kubenswrapper[5099]: I0121 18:28:31.347371 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-pznvg" event={"ID":"27547e6e-e7d9-4aed-9ce4-f2cf98352e1d","Type":"ContainerStarted","Data":"891b566c0e46918e637050f13c8e360176ca5c4907b430afa20cc07f9c4bd3ca"} Jan 21 18:28:31 crc kubenswrapper[5099]: I0121 18:28:31.349541 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-szdsk" event={"ID":"4bb52920-da03-43a3-bde0-0504738f45ab","Type":"ContainerStarted","Data":"f0a0393a4c200f0fa24243db4621404a09ada0bf6862cc1af6659fc198164279"} Jan 21 18:28:31 crc kubenswrapper[5099]: I0121 18:28:31.383912 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=6.65501047 podStartE2EDuration="30.38387879s" podCreationTimestamp="2026-01-21 18:28:01 +0000 UTC" firstStartedPulling="2026-01-21 18:28:02.319916206 +0000 UTC m=+839.733878667" lastFinishedPulling="2026-01-21 18:28:26.048784526 +0000 UTC m=+863.462746987" observedRunningTime="2026-01-21 18:28:31.38021909 +0000 UTC m=+868.794181551" watchObservedRunningTime="2026-01-21 18:28:31.38387879 +0000 UTC m=+868.797841251" Jan 21 18:28:39 crc kubenswrapper[5099]: I0121 18:28:39.017133 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 21 18:28:39 crc kubenswrapper[5099]: I0121 18:28:39.044293 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build"
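
Two details in the span above deserve a note. The W-level cadvisor warning ("Failed to process watch event ... Status 404") names the crio-891b566c... cgroup moments before the same container ID appears in a ContainerStarted event for the webhook pod; this is ordinarily a benign race between cgroup-watch processing and CRI-O setting up the container, and the pod start here is unaffected. The "SyncLoop (PLEG): event for pod" records, meanwhile, come from the Pod Lifecycle Event Generator, the kubelet's relist-based view of runtime state changes, and a per-pod timeline of them is often the quickest way to read a journal this busy. A short Go filter, assuming the klog field layout seen above:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    // Matches lines like:
    //   I0121 18:28:31.347371 ... "SyncLoop (PLEG): event for pod"
    //   pod="ns/name" event={"ID":"...","Type":"ContainerStarted","Data":"<id>"}
    var pleg = regexp.MustCompile(`I\d{4} (\d{2}:\d{2}:\d{2}\.\d+).*"SyncLoop \(PLEG\): event for pod" pod="([^"]+)" event=\{"ID":"[^"]+","Type":"([^"]+)","Data":"([^"]+)"\}`)

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
        for sc.Scan() {
            if m := pleg.FindStringSubmatch(sc.Text()); m != nil {
                // time, namespace/pod, event type, first 12 chars of the container/sandbox ID
                fmt.Printf("%s %-55s %-16s %.12s\n", m[1], m[2], m[3], m[4])
            }
        }
    }

Read that way, ContainerDied followed by ContainerStarted for the same pod (machine-config-daemon-hsl47 earlier in this log) is the restart pattern, while a ContainerDied with no successor (the completed auto-csr-approver pods) is a normal exit.
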
Jan 21 18:28:39 crc kubenswrapper[5099]: I0121 18:28:39.050904 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-global-ca\"" Jan 21 18:28:39 crc kubenswrapper[5099]: I0121 18:28:39.050921 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-sys-config\"" Jan 21 18:28:39 crc kubenswrapper[5099]: I0121 18:28:39.050964 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-j8qh6\"" Jan 21 18:28:39 crc kubenswrapper[5099]: I0121 18:28:39.050921 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-ca\"" Jan 21 18:28:39 crc kubenswrapper[5099]: I0121 18:28:39.061391 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 21 18:28:39 crc kubenswrapper[5099]: I0121 18:28:39.175424 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d2d325bb-8b43-4e45-a98a-a2e5b493f435-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 18:28:39 crc kubenswrapper[5099]: I0121 18:28:39.175506 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-j8qh6-pull\" (UniqueName: \"kubernetes.io/secret/d2d325bb-8b43-4e45-a98a-a2e5b493f435-builder-dockercfg-j8qh6-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 18:28:39 crc kubenswrapper[5099]: I0121 18:28:39.175546 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d2d325bb-8b43-4e45-a98a-a2e5b493f435-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 18:28:39 crc kubenswrapper[5099]: I0121 18:28:39.175840 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d2d325bb-8b43-4e45-a98a-a2e5b493f435-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 18:28:39 crc kubenswrapper[5099]: I0121 18:28:39.175899 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d2d325bb-8b43-4e45-a98a-a2e5b493f435-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 18:28:39 crc kubenswrapper[5099]: I0121 18:28:39.175945 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: 
\"kubernetes.io/empty-dir/d2d325bb-8b43-4e45-a98a-a2e5b493f435-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 18:28:39 crc kubenswrapper[5099]: I0121 18:28:39.176005 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8zp9\" (UniqueName: \"kubernetes.io/projected/d2d325bb-8b43-4e45-a98a-a2e5b493f435-kube-api-access-p8zp9\") pod \"service-telemetry-operator-1-build\" (UID: \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 18:28:39 crc kubenswrapper[5099]: I0121 18:28:39.176048 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d2d325bb-8b43-4e45-a98a-a2e5b493f435-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 18:28:39 crc kubenswrapper[5099]: I0121 18:28:39.176231 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d2d325bb-8b43-4e45-a98a-a2e5b493f435-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 18:28:39 crc kubenswrapper[5099]: I0121 18:28:39.176255 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d2d325bb-8b43-4e45-a98a-a2e5b493f435-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 18:28:39 crc kubenswrapper[5099]: I0121 18:28:39.176330 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d2d325bb-8b43-4e45-a98a-a2e5b493f435-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 18:28:39 crc kubenswrapper[5099]: I0121 18:28:39.176378 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-j8qh6-push\" (UniqueName: \"kubernetes.io/secret/d2d325bb-8b43-4e45-a98a-a2e5b493f435-builder-dockercfg-j8qh6-push\") pod \"service-telemetry-operator-1-build\" (UID: \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 18:28:39 crc kubenswrapper[5099]: I0121 18:28:39.278149 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p8zp9\" (UniqueName: \"kubernetes.io/projected/d2d325bb-8b43-4e45-a98a-a2e5b493f435-kube-api-access-p8zp9\") pod \"service-telemetry-operator-1-build\" (UID: \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 18:28:39 crc kubenswrapper[5099]: I0121 18:28:39.278227 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d2d325bb-8b43-4e45-a98a-a2e5b493f435-buildcachedir\") pod 
\"service-telemetry-operator-1-build\" (UID: \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 18:28:39 crc kubenswrapper[5099]: I0121 18:28:39.278286 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d2d325bb-8b43-4e45-a98a-a2e5b493f435-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 18:28:39 crc kubenswrapper[5099]: I0121 18:28:39.278307 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d2d325bb-8b43-4e45-a98a-a2e5b493f435-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 18:28:39 crc kubenswrapper[5099]: I0121 18:28:39.278445 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d2d325bb-8b43-4e45-a98a-a2e5b493f435-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 18:28:39 crc kubenswrapper[5099]: I0121 18:28:39.278530 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d2d325bb-8b43-4e45-a98a-a2e5b493f435-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 18:28:39 crc kubenswrapper[5099]: I0121 18:28:39.278602 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-j8qh6-push\" (UniqueName: \"kubernetes.io/secret/d2d325bb-8b43-4e45-a98a-a2e5b493f435-builder-dockercfg-j8qh6-push\") pod \"service-telemetry-operator-1-build\" (UID: \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 18:28:39 crc kubenswrapper[5099]: I0121 18:28:39.278659 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d2d325bb-8b43-4e45-a98a-a2e5b493f435-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 18:28:39 crc kubenswrapper[5099]: I0121 18:28:39.278723 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-j8qh6-pull\" (UniqueName: \"kubernetes.io/secret/d2d325bb-8b43-4e45-a98a-a2e5b493f435-builder-dockercfg-j8qh6-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 18:28:39 crc kubenswrapper[5099]: I0121 18:28:39.278772 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d2d325bb-8b43-4e45-a98a-a2e5b493f435-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 18:28:39 crc kubenswrapper[5099]: I0121 
18:28:39.278843 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d2d325bb-8b43-4e45-a98a-a2e5b493f435-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 18:28:39 crc kubenswrapper[5099]: I0121 18:28:39.278865 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d2d325bb-8b43-4e45-a98a-a2e5b493f435-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 18:28:39 crc kubenswrapper[5099]: I0121 18:28:39.278904 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d2d325bb-8b43-4e45-a98a-a2e5b493f435-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 18:28:39 crc kubenswrapper[5099]: I0121 18:28:39.279419 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d2d325bb-8b43-4e45-a98a-a2e5b493f435-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 18:28:39 crc kubenswrapper[5099]: I0121 18:28:39.280132 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d2d325bb-8b43-4e45-a98a-a2e5b493f435-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 18:28:39 crc kubenswrapper[5099]: I0121 18:28:39.280662 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d2d325bb-8b43-4e45-a98a-a2e5b493f435-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 18:28:39 crc kubenswrapper[5099]: I0121 18:28:39.280767 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d2d325bb-8b43-4e45-a98a-a2e5b493f435-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 18:28:39 crc kubenswrapper[5099]: I0121 18:28:39.281059 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d2d325bb-8b43-4e45-a98a-a2e5b493f435-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 18:28:39 crc kubenswrapper[5099]: I0121 18:28:39.281360 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d2d325bb-8b43-4e45-a98a-a2e5b493f435-container-storage-root\") pod \"service-telemetry-operator-1-build\" 
(UID: \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 18:28:39 crc kubenswrapper[5099]: I0121 18:28:39.282153 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d2d325bb-8b43-4e45-a98a-a2e5b493f435-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 18:28:39 crc kubenswrapper[5099]: I0121 18:28:39.282397 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d2d325bb-8b43-4e45-a98a-a2e5b493f435-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 18:28:39 crc kubenswrapper[5099]: I0121 18:28:39.287888 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-j8qh6-pull\" (UniqueName: \"kubernetes.io/secret/d2d325bb-8b43-4e45-a98a-a2e5b493f435-builder-dockercfg-j8qh6-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 18:28:39 crc kubenswrapper[5099]: I0121 18:28:39.297335 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-j8qh6-push\" (UniqueName: \"kubernetes.io/secret/d2d325bb-8b43-4e45-a98a-a2e5b493f435-builder-dockercfg-j8qh6-push\") pod \"service-telemetry-operator-1-build\" (UID: \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 18:28:39 crc kubenswrapper[5099]: I0121 18:28:39.304257 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8zp9\" (UniqueName: \"kubernetes.io/projected/d2d325bb-8b43-4e45-a98a-a2e5b493f435-kube-api-access-p8zp9\") pod \"service-telemetry-operator-1-build\" (UID: \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 18:28:39 crc kubenswrapper[5099]: I0121 18:28:39.372044 5099 util.go:30] "No sandbox for pod can be found. 
Jan 21 18:28:42 crc kubenswrapper[5099]: I0121 18:28:42.406836 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"]
Jan 21 18:28:42 crc kubenswrapper[5099]: W0121 18:28:42.422959 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2d325bb_8b43_4e45_a98a_a2e5b493f435.slice/crio-f9f92a70571e8cfd452c97c8f1e4762a10b45300067eddd76944b82edfb62acb WatchSource:0}: Error finding container f9f92a70571e8cfd452c97c8f1e4762a10b45300067eddd76944b82edfb62acb: Status 404 returned error can't find the container with id f9f92a70571e8cfd452c97c8f1e4762a10b45300067eddd76944b82edfb62acb
Jan 21 18:28:42 crc kubenswrapper[5099]: I0121 18:28:42.450625 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"d2d325bb-8b43-4e45-a98a-a2e5b493f435","Type":"ContainerStarted","Data":"f9f92a70571e8cfd452c97c8f1e4762a10b45300067eddd76944b82edfb62acb"}
Jan 21 18:28:42 crc kubenswrapper[5099]: I0121 18:28:42.467480 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="67bd99a7-8bd7-4673-a648-c41eee407194" containerName="elasticsearch" probeResult="failure" output=<
Jan 21 18:28:42 crc kubenswrapper[5099]: {"timestamp": "2026-01-21T18:28:42+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Jan 21 18:28:42 crc kubenswrapper[5099]: >
Jan 21 18:28:43 crc kubenswrapper[5099]: I0121 18:28:43.461799 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-pznvg" event={"ID":"27547e6e-e7d9-4aed-9ce4-f2cf98352e1d","Type":"ContainerStarted","Data":"66bd8a409dc0a830c1e64e2e5fb3f3ca5b9de4ebe95cbebff5bec0300e5fb6dd"}
Jan 21 18:28:43 crc kubenswrapper[5099]: I0121 18:28:43.462540 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-pznvg"
Jan 21 18:28:43 crc kubenswrapper[5099]: I0121 18:28:43.463600 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-szdsk" event={"ID":"4bb52920-da03-43a3-bde0-0504738f45ab","Type":"ContainerStarted","Data":"c44f7cfe8612d19cb1351e0fc7b7af6e2bf72f9ee5aaf01636c7e5168d32d74e"}
Jan 21 18:28:43 crc kubenswrapper[5099]: I0121 18:28:43.486185 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-7894b5b9b4-pznvg" podStartSLOduration=2.040958729 podStartE2EDuration="13.486158115s" podCreationTimestamp="2026-01-21 18:28:30 +0000 UTC" firstStartedPulling="2026-01-21 18:28:30.716370586 +0000 UTC m=+868.130333047" lastFinishedPulling="2026-01-21 18:28:42.161569972 +0000 UTC m=+879.575532433" observedRunningTime="2026-01-21 18:28:43.482917386 +0000 UTC m=+880.896879867" watchObservedRunningTime="2026-01-21 18:28:43.486158115 +0000 UTC m=+880.900120576"
Jan 21 18:28:45 crc kubenswrapper[5099]: I0121 18:28:45.506749 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-szdsk" podStartSLOduration=4.779128762 podStartE2EDuration="16.506703644s" podCreationTimestamp="2026-01-21 18:28:29 +0000 UTC" firstStartedPulling="2026-01-21 18:28:30.495266751 +0000 UTC m=+867.909229212" lastFinishedPulling="2026-01-21 18:28:42.222841633 +0000 UTC m=+879.636804094" observedRunningTime="2026-01-21 18:28:45.504410928 +0000 UTC m=+882.918373409" watchObservedRunningTime="2026-01-21 18:28:45.506703644 +0000 UTC m=+882.920666105"
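The two "Observed pod startup duration" entries above show how the startup-latency tracker splits its numbers: podStartE2EDuration is wall-clock time from pod creation to observed running, while podStartSLOduration appears to be that same interval minus the time spent pulling images. The webhook pod's figures check out exactly:

    image pull   = 18:28:42.161569972 - 18:28:30.716370586 = 11.445199386 s
    SLO duration = 13.486158115 s - 11.445199386 s = 2.040958729 s   (the logged podStartSLOduration)

The cainjector entry satisfies the same identity: 16.506703644 - (42.222841633 - 30.495266751) = 4.779128762.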
Jan 21 18:28:47 crc kubenswrapper[5099]: I0121 18:28:47.478819 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="67bd99a7-8bd7-4673-a648-c41eee407194" containerName="elasticsearch" probeResult="failure" output=<
Jan 21 18:28:47 crc kubenswrapper[5099]: {"timestamp": "2026-01-21T18:28:47+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Jan 21 18:28:47 crc kubenswrapper[5099]: >
Jan 21 18:28:49 crc kubenswrapper[5099]: I0121 18:28:49.190608 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858d87f86b-dkcxp"]
Jan 21 18:28:49 crc kubenswrapper[5099]: I0121 18:28:49.198680 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-dkcxp"]
Jan 21 18:28:49 crc kubenswrapper[5099]: I0121 18:28:49.198825 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-dkcxp"
Jan 21 18:28:49 crc kubenswrapper[5099]: I0121 18:28:49.202980 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-dockercfg-fv42h\""
Jan 21 18:28:49 crc kubenswrapper[5099]: I0121 18:28:49.281635 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzcrg\" (UniqueName: \"kubernetes.io/projected/6c19eb6f-a0f6-400c-9c2d-bb4a665d1bcb-kube-api-access-bzcrg\") pod \"cert-manager-858d87f86b-dkcxp\" (UID: \"6c19eb6f-a0f6-400c-9c2d-bb4a665d1bcb\") " pod="cert-manager/cert-manager-858d87f86b-dkcxp"
Jan 21 18:28:49 crc kubenswrapper[5099]: I0121 18:28:49.281724 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6c19eb6f-a0f6-400c-9c2d-bb4a665d1bcb-bound-sa-token\") pod \"cert-manager-858d87f86b-dkcxp\" (UID: \"6c19eb6f-a0f6-400c-9c2d-bb4a665d1bcb\") " pod="cert-manager/cert-manager-858d87f86b-dkcxp"
Jan 21 18:28:49 crc kubenswrapper[5099]: I0121 18:28:49.343464 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"]
Jan 21 18:28:49 crc kubenswrapper[5099]: I0121 18:28:49.382852 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bzcrg\" (UniqueName: \"kubernetes.io/projected/6c19eb6f-a0f6-400c-9c2d-bb4a665d1bcb-kube-api-access-bzcrg\") pod \"cert-manager-858d87f86b-dkcxp\" (UID: \"6c19eb6f-a0f6-400c-9c2d-bb4a665d1bcb\") " pod="cert-manager/cert-manager-858d87f86b-dkcxp"
Jan 21 18:28:49 crc kubenswrapper[5099]: I0121 18:28:49.382929 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6c19eb6f-a0f6-400c-9c2d-bb4a665d1bcb-bound-sa-token\") pod \"cert-manager-858d87f86b-dkcxp\" (UID: \"6c19eb6f-a0f6-400c-9c2d-bb4a665d1bcb\") " pod="cert-manager/cert-manager-858d87f86b-dkcxp"
Jan 21 18:28:49 crc kubenswrapper[5099]: I0121 18:28:49.419549 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6c19eb6f-a0f6-400c-9c2d-bb4a665d1bcb-bound-sa-token\") pod \"cert-manager-858d87f86b-dkcxp\" (UID: \"6c19eb6f-a0f6-400c-9c2d-bb4a665d1bcb\") " pod="cert-manager/cert-manager-858d87f86b-dkcxp"
Jan 21 18:28:49 crc kubenswrapper[5099]: I0121 18:28:49.421028 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzcrg\" (UniqueName: \"kubernetes.io/projected/6c19eb6f-a0f6-400c-9c2d-bb4a665d1bcb-kube-api-access-bzcrg\") pod \"cert-manager-858d87f86b-dkcxp\" (UID: \"6c19eb6f-a0f6-400c-9c2d-bb4a665d1bcb\") " pod="cert-manager/cert-manager-858d87f86b-dkcxp"
Jan 21 18:28:49 crc kubenswrapper[5099]: I0121 18:28:49.478926 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-pznvg"
Jan 21 18:28:49 crc kubenswrapper[5099]: I0121 18:28:49.530876 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-dkcxp"
Jan 21 18:28:51 crc kubenswrapper[5099]: I0121 18:28:51.464853 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"]
Jan 21 18:28:52 crc kubenswrapper[5099]: I0121 18:28:52.441796 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="67bd99a7-8bd7-4673-a648-c41eee407194" containerName="elasticsearch" probeResult="failure" output=<
Jan 21 18:28:52 crc kubenswrapper[5099]: {"timestamp": "2026-01-21T18:28:52+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Jan 21 18:28:52 crc kubenswrapper[5099]: >
Jan 21 18:28:52 crc kubenswrapper[5099]: I0121 18:28:52.546252 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 18:28:52 crc kubenswrapper[5099]: I0121 18:28:52.548883 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-sys-config\""
Jan 21 18:28:52 crc kubenswrapper[5099]: I0121 18:28:52.549292 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-ca\""
Jan 21 18:28:52 crc kubenswrapper[5099]: I0121 18:28:52.550092 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-global-ca\""
Jan 21 18:28:52 crc kubenswrapper[5099]: I0121 18:28:52.560193 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"]
Jan 21 18:28:52 crc kubenswrapper[5099]: I0121 18:28:52.650671 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 18:28:52 crc kubenswrapper[5099]: I0121 18:28:52.650769 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 18:28:52 crc kubenswrapper[5099]: I0121 18:28:52.650952 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-j8qh6-push\" (UniqueName: \"kubernetes.io/secret/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-builder-dockercfg-j8qh6-push\") pod \"service-telemetry-operator-2-build\" (UID: \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 18:28:52 crc kubenswrapper[5099]: I0121 18:28:52.651007 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 18:28:52 crc kubenswrapper[5099]: I0121 18:28:52.651380 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 18:28:52 crc kubenswrapper[5099]: I0121 18:28:52.651489 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 18:28:52 crc kubenswrapper[5099]: I0121 18:28:52.651687 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 18:28:52 crc kubenswrapper[5099]: I0121 18:28:52.651815 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 18:28:52 crc kubenswrapper[5099]: I0121 18:28:52.651849 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-j8qh6-pull\" (UniqueName: \"kubernetes.io/secret/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-builder-dockercfg-j8qh6-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 18:28:52 crc kubenswrapper[5099]: I0121 18:28:52.651878 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 18:28:52 crc kubenswrapper[5099]: I0121 18:28:52.651953 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 18:28:52 crc kubenswrapper[5099]: I0121 18:28:52.651993 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvzh7\" (UniqueName: \"kubernetes.io/projected/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-kube-api-access-kvzh7\") pod \"service-telemetry-operator-2-build\" (UID: \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 18:28:52 crc kubenswrapper[5099]: I0121 18:28:52.753410 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 18:28:52 crc kubenswrapper[5099]: I0121 18:28:52.753490 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-j8qh6-push\" (UniqueName: \"kubernetes.io/secret/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-builder-dockercfg-j8qh6-push\") pod \"service-telemetry-operator-2-build\" (UID: \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 18:28:52 crc kubenswrapper[5099]: I0121 18:28:52.753523 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 18:28:52 crc kubenswrapper[5099]: I0121 18:28:52.753571 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 18:28:52 crc kubenswrapper[5099]: I0121 18:28:52.753600 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 18:28:52 crc kubenswrapper[5099]: I0121 18:28:52.753638 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 18:28:52 crc kubenswrapper[5099]: I0121 18:28:52.753659 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 18:28:52 crc kubenswrapper[5099]: I0121 18:28:52.753675 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-j8qh6-pull\" (UniqueName: \"kubernetes.io/secret/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-builder-dockercfg-j8qh6-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 18:28:52 crc kubenswrapper[5099]: I0121 18:28:52.753694 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 18:28:52 crc kubenswrapper[5099]: I0121 18:28:52.753722 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 18:28:52 crc kubenswrapper[5099]: I0121 18:28:52.753756 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kvzh7\" (UniqueName: \"kubernetes.io/projected/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-kube-api-access-kvzh7\") pod \"service-telemetry-operator-2-build\" (UID: \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 18:28:52 crc kubenswrapper[5099]: I0121 18:28:52.753785 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 18:28:52 crc kubenswrapper[5099]: I0121 18:28:52.754480 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 18:28:52 crc kubenswrapper[5099]: I0121 18:28:52.754820 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 18:28:52 crc kubenswrapper[5099]: I0121 18:28:52.754986 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 18:28:52 crc kubenswrapper[5099]: I0121 18:28:52.755499 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 18:28:52 crc kubenswrapper[5099]: I0121 18:28:52.755650 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 18:28:52 crc kubenswrapper[5099]: I0121 18:28:52.755704 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 18:28:52 crc kubenswrapper[5099]: I0121 18:28:52.755869 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 18:28:52 crc kubenswrapper[5099]: I0121 18:28:52.755872 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 18:28:52 crc kubenswrapper[5099]: I0121 18:28:52.755961 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 18:28:52 crc kubenswrapper[5099]: I0121 18:28:52.767888 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-j8qh6-pull\" (UniqueName: \"kubernetes.io/secret/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-builder-dockercfg-j8qh6-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 18:28:52 crc kubenswrapper[5099]: I0121 18:28:52.769365 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-j8qh6-push\" (UniqueName: \"kubernetes.io/secret/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-builder-dockercfg-j8qh6-push\") pod \"service-telemetry-operator-2-build\" (UID: \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 18:28:52 crc kubenswrapper[5099]: I0121 18:28:52.775346 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvzh7\" (UniqueName: \"kubernetes.io/projected/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-kube-api-access-kvzh7\") pod \"service-telemetry-operator-2-build\" (UID: \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 18:28:52 crc kubenswrapper[5099]: I0121 18:28:52.871689 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 18:28:56 crc kubenswrapper[5099]: I0121 18:28:56.098959 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"]
Jan 21 18:28:56 crc kubenswrapper[5099]: I0121 18:28:56.122402 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-dkcxp"]
Jan 21 18:28:56 crc kubenswrapper[5099]: I0121 18:28:56.577591 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"356d3d5a-88fb-4d4c-bc79-cc28af1ac489","Type":"ContainerStarted","Data":"c584fcee4864730e932626c8903071b3e5830d396fe2226df36e0d15b714f4b6"}
Jan 21 18:28:56 crc kubenswrapper[5099]: I0121 18:28:56.578820 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-dkcxp" event={"ID":"6c19eb6f-a0f6-400c-9c2d-bb4a665d1bcb","Type":"ContainerStarted","Data":"a67f45a38420f7479900c51f11b2371583d9f349d6ad8e0079e2c46885f28023"}
Jan 21 18:28:57 crc kubenswrapper[5099]: I0121 18:28:57.449301 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="67bd99a7-8bd7-4673-a648-c41eee407194" containerName="elasticsearch" probeResult="failure" output=<
Jan 21 18:28:57 crc kubenswrapper[5099]: {"timestamp": "2026-01-21T18:28:57+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Jan 21 18:28:57 crc kubenswrapper[5099]: >
Jan 21 18:28:58 crc kubenswrapper[5099]: I0121 18:28:58.599375 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"d2d325bb-8b43-4e45-a98a-a2e5b493f435","Type":"ContainerStarted","Data":"e044b6d8fdf6e391a4994512b652d54ce2749ff916651dd7f61ebef6b50669ed"}
Jan 21 18:28:58 crc kubenswrapper[5099]: I0121 18:28:58.599489 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-1-build" podUID="d2d325bb-8b43-4e45-a98a-a2e5b493f435" containerName="manage-dockerfile" containerID="cri-o://e044b6d8fdf6e391a4994512b652d54ce2749ff916651dd7f61ebef6b50669ed" gracePeriod=30
Jan 21 18:28:59 crc kubenswrapper[5099]: I0121 18:28:59.608002 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"356d3d5a-88fb-4d4c-bc79-cc28af1ac489","Type":"ContainerStarted","Data":"69ff368e5a3d855a920e57677997d1832764b47c591ecec2f1b3dc6e41e8d326"}
Jan 21 18:28:59 crc kubenswrapper[5099]: I0121 18:28:59.610379 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-dkcxp" event={"ID":"6c19eb6f-a0f6-400c-9c2d-bb4a665d1bcb","Type":"ContainerStarted","Data":"f742a07f2b35a015abed643d045deac5cd3d40ca8f5938c6efcab53bf0ea9075"}
Jan 21 18:28:59 crc kubenswrapper[5099]: I0121 18:28:59.618743 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_d2d325bb-8b43-4e45-a98a-a2e5b493f435/manage-dockerfile/0.log"
Jan 21 18:28:59 crc kubenswrapper[5099]: I0121 18:28:59.618801 5099 generic.go:358] "Generic (PLEG): container finished" podID="d2d325bb-8b43-4e45-a98a-a2e5b493f435" containerID="e044b6d8fdf6e391a4994512b652d54ce2749ff916651dd7f61ebef6b50669ed" exitCode=1
Jan 21 18:28:59 crc kubenswrapper[5099]: I0121 18:28:59.618894 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"d2d325bb-8b43-4e45-a98a-a2e5b493f435","Type":"ContainerDied","Data":"e044b6d8fdf6e391a4994512b652d54ce2749ff916651dd7f61ebef6b50669ed"}
Jan 21 18:28:59 crc kubenswrapper[5099]: I0121 18:28:59.747115 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_d2d325bb-8b43-4e45-a98a-a2e5b493f435/manage-dockerfile/0.log"
Jan 21 18:28:59 crc kubenswrapper[5099]: I0121 18:28:59.747224 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build"
Jan 21 18:28:59 crc kubenswrapper[5099]: I0121 18:28:59.748238 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858d87f86b-dkcxp" podStartSLOduration=10.748212774 podStartE2EDuration="10.748212774s" podCreationTimestamp="2026-01-21 18:28:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:28:59.745857835 +0000 UTC m=+897.159820306" watchObservedRunningTime="2026-01-21 18:28:59.748212774 +0000 UTC m=+897.162175245"
Jan 21 18:28:59 crc kubenswrapper[5099]: I0121 18:28:59.779693 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d2d325bb-8b43-4e45-a98a-a2e5b493f435-buildcachedir\") pod \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\" (UID: \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\") "
Jan 21 18:28:59 crc kubenswrapper[5099]: I0121 18:28:59.780218 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d2d325bb-8b43-4e45-a98a-a2e5b493f435-node-pullsecrets\") pod \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\" (UID: \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\") "
Jan 21 18:28:59 crc kubenswrapper[5099]: I0121 18:28:59.779879 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2d325bb-8b43-4e45-a98a-a2e5b493f435-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "d2d325bb-8b43-4e45-a98a-a2e5b493f435" (UID: "d2d325bb-8b43-4e45-a98a-a2e5b493f435"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 18:28:59 crc kubenswrapper[5099]: I0121 18:28:59.780301 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d2d325bb-8b43-4e45-a98a-a2e5b493f435-build-blob-cache\") pod \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\" (UID: \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\") "
Jan 21 18:28:59 crc kubenswrapper[5099]: I0121 18:28:59.780330 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2d325bb-8b43-4e45-a98a-a2e5b493f435-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "d2d325bb-8b43-4e45-a98a-a2e5b493f435" (UID: "d2d325bb-8b43-4e45-a98a-a2e5b493f435"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 18:28:59 crc kubenswrapper[5099]: I0121 18:28:59.780368 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d2d325bb-8b43-4e45-a98a-a2e5b493f435-build-system-configs\") pod \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\" (UID: \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\") "
Jan 21 18:28:59 crc kubenswrapper[5099]: I0121 18:28:59.780397 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d2d325bb-8b43-4e45-a98a-a2e5b493f435-buildworkdir\") pod \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\" (UID: \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\") "
Jan 21 18:28:59 crc kubenswrapper[5099]: I0121 18:28:59.780429 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d2d325bb-8b43-4e45-a98a-a2e5b493f435-build-proxy-ca-bundles\") pod \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\" (UID: \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\") "
Jan 21 18:28:59 crc kubenswrapper[5099]: I0121 18:28:59.780487 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-j8qh6-pull\" (UniqueName: \"kubernetes.io/secret/d2d325bb-8b43-4e45-a98a-a2e5b493f435-builder-dockercfg-j8qh6-pull\") pod \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\" (UID: \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\") "
Jan 21 18:28:59 crc kubenswrapper[5099]: I0121 18:28:59.780543 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8zp9\" (UniqueName: \"kubernetes.io/projected/d2d325bb-8b43-4e45-a98a-a2e5b493f435-kube-api-access-p8zp9\") pod \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\" (UID: \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\") "
Jan 21 18:28:59 crc kubenswrapper[5099]: I0121 18:28:59.780599 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d2d325bb-8b43-4e45-a98a-a2e5b493f435-build-ca-bundles\") pod \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\" (UID: \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\") "
Jan 21 18:28:59 crc kubenswrapper[5099]: I0121 18:28:59.780642 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d2d325bb-8b43-4e45-a98a-a2e5b493f435-container-storage-run\") pod \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\" (UID: \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\") "
Jan 21 18:28:59 crc kubenswrapper[5099]: I0121 18:28:59.780875 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-j8qh6-push\" (UniqueName: \"kubernetes.io/secret/d2d325bb-8b43-4e45-a98a-a2e5b493f435-builder-dockercfg-j8qh6-push\") pod \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\" (UID: \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\") "
Jan 21 18:28:59 crc kubenswrapper[5099]: I0121 18:28:59.780950 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d2d325bb-8b43-4e45-a98a-a2e5b493f435-container-storage-root\") pod \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\" (UID: \"d2d325bb-8b43-4e45-a98a-a2e5b493f435\") "
Jan 21 18:28:59 crc kubenswrapper[5099]: I0121 18:28:59.781217 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2d325bb-8b43-4e45-a98a-a2e5b493f435-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "d2d325bb-8b43-4e45-a98a-a2e5b493f435" (UID: "d2d325bb-8b43-4e45-a98a-a2e5b493f435"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 18:28:59 crc kubenswrapper[5099]: I0121 18:28:59.781391 5099 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d2d325bb-8b43-4e45-a98a-a2e5b493f435-buildcachedir\") on node \"crc\" DevicePath \"\""
Jan 21 18:28:59 crc kubenswrapper[5099]: I0121 18:28:59.781416 5099 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d2d325bb-8b43-4e45-a98a-a2e5b493f435-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Jan 21 18:28:59 crc kubenswrapper[5099]: I0121 18:28:59.781543 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2d325bb-8b43-4e45-a98a-a2e5b493f435-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "d2d325bb-8b43-4e45-a98a-a2e5b493f435" (UID: "d2d325bb-8b43-4e45-a98a-a2e5b493f435"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 18:28:59 crc kubenswrapper[5099]: I0121 18:28:59.781708 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2d325bb-8b43-4e45-a98a-a2e5b493f435-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "d2d325bb-8b43-4e45-a98a-a2e5b493f435" (UID: "d2d325bb-8b43-4e45-a98a-a2e5b493f435"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 18:28:59 crc kubenswrapper[5099]: I0121 18:28:59.781935 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2d325bb-8b43-4e45-a98a-a2e5b493f435-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "d2d325bb-8b43-4e45-a98a-a2e5b493f435" (UID: "d2d325bb-8b43-4e45-a98a-a2e5b493f435"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 18:28:59 crc kubenswrapper[5099]: I0121 18:28:59.782030 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2d325bb-8b43-4e45-a98a-a2e5b493f435-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "d2d325bb-8b43-4e45-a98a-a2e5b493f435" (UID: "d2d325bb-8b43-4e45-a98a-a2e5b493f435"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 18:28:59 crc kubenswrapper[5099]: I0121 18:28:59.782140 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2d325bb-8b43-4e45-a98a-a2e5b493f435-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "d2d325bb-8b43-4e45-a98a-a2e5b493f435" (UID: "d2d325bb-8b43-4e45-a98a-a2e5b493f435"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 18:28:59 crc kubenswrapper[5099]: I0121 18:28:59.786880 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2d325bb-8b43-4e45-a98a-a2e5b493f435-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "d2d325bb-8b43-4e45-a98a-a2e5b493f435" (UID: "d2d325bb-8b43-4e45-a98a-a2e5b493f435"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 18:28:59 crc kubenswrapper[5099]: I0121 18:28:59.793489 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2d325bb-8b43-4e45-a98a-a2e5b493f435-builder-dockercfg-j8qh6-pull" (OuterVolumeSpecName: "builder-dockercfg-j8qh6-pull") pod "d2d325bb-8b43-4e45-a98a-a2e5b493f435" (UID: "d2d325bb-8b43-4e45-a98a-a2e5b493f435"). InnerVolumeSpecName "builder-dockercfg-j8qh6-pull". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 18:28:59 crc kubenswrapper[5099]: I0121 18:28:59.799981 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2d325bb-8b43-4e45-a98a-a2e5b493f435-builder-dockercfg-j8qh6-push" (OuterVolumeSpecName: "builder-dockercfg-j8qh6-push") pod "d2d325bb-8b43-4e45-a98a-a2e5b493f435" (UID: "d2d325bb-8b43-4e45-a98a-a2e5b493f435"). InnerVolumeSpecName "builder-dockercfg-j8qh6-push". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 18:28:59 crc kubenswrapper[5099]: I0121 18:28:59.814993 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2d325bb-8b43-4e45-a98a-a2e5b493f435-kube-api-access-p8zp9" (OuterVolumeSpecName: "kube-api-access-p8zp9") pod "d2d325bb-8b43-4e45-a98a-a2e5b493f435" (UID: "d2d325bb-8b43-4e45-a98a-a2e5b493f435"). InnerVolumeSpecName "kube-api-access-p8zp9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 18:28:59 crc kubenswrapper[5099]: I0121 18:28:59.882909 5099 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-j8qh6-push\" (UniqueName: \"kubernetes.io/secret/d2d325bb-8b43-4e45-a98a-a2e5b493f435-builder-dockercfg-j8qh6-push\") on node \"crc\" DevicePath \"\""
Jan 21 18:28:59 crc kubenswrapper[5099]: I0121 18:28:59.882942 5099 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d2d325bb-8b43-4e45-a98a-a2e5b493f435-container-storage-root\") on node \"crc\" DevicePath \"\""
Jan 21 18:28:59 crc kubenswrapper[5099]: I0121 18:28:59.882954 5099 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d2d325bb-8b43-4e45-a98a-a2e5b493f435-build-blob-cache\") on node \"crc\" DevicePath \"\""
Jan 21 18:28:59 crc kubenswrapper[5099]: I0121 18:28:59.882962 5099 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d2d325bb-8b43-4e45-a98a-a2e5b493f435-build-system-configs\") on node \"crc\" DevicePath \"\""
Jan 21 18:28:59 crc kubenswrapper[5099]: I0121 18:28:59.882972 5099 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d2d325bb-8b43-4e45-a98a-a2e5b493f435-buildworkdir\") on node \"crc\" DevicePath \"\""
Jan 21 18:28:59 crc kubenswrapper[5099]: I0121 18:28:59.882981 5099 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d2d325bb-8b43-4e45-a98a-a2e5b493f435-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 21 18:28:59 crc kubenswrapper[5099]: I0121 18:28:59.882989 5099 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-j8qh6-pull\" (UniqueName: \"kubernetes.io/secret/d2d325bb-8b43-4e45-a98a-a2e5b493f435-builder-dockercfg-j8qh6-pull\") on node \"crc\" DevicePath \"\""
Jan 21 18:28:59 crc kubenswrapper[5099]: I0121 18:28:59.882997 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p8zp9\" (UniqueName: \"kubernetes.io/projected/d2d325bb-8b43-4e45-a98a-a2e5b493f435-kube-api-access-p8zp9\") on node \"crc\" DevicePath \"\""
Jan 21 18:28:59 crc kubenswrapper[5099]: I0121 18:28:59.883005 5099 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d2d325bb-8b43-4e45-a98a-a2e5b493f435-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 21 18:28:59 crc kubenswrapper[5099]: I0121 18:28:59.883013 5099 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d2d325bb-8b43-4e45-a98a-a2e5b493f435-container-storage-run\") on node \"crc\" DevicePath \"\""
Jan 21 18:29:00 crc kubenswrapper[5099]: I0121 18:29:00.787294 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_d2d325bb-8b43-4e45-a98a-a2e5b493f435/manage-dockerfile/0.log"
Jan 21 18:29:00 crc kubenswrapper[5099]: I0121 18:29:00.787599 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"d2d325bb-8b43-4e45-a98a-a2e5b493f435","Type":"ContainerDied","Data":"f9f92a70571e8cfd452c97c8f1e4762a10b45300067eddd76944b82edfb62acb"}
Jan 21 18:29:00 crc kubenswrapper[5099]: I0121 18:29:00.787847 5099 scope.go:117] "RemoveContainer" containerID="e044b6d8fdf6e391a4994512b652d54ce2749ff916651dd7f61ebef6b50669ed"
Jan 21 18:29:00 crc kubenswrapper[5099]: I0121 18:29:00.788480 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build"
Jan 21 18:29:00 crc kubenswrapper[5099]: I0121 18:29:00.824410 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"]
Jan 21 18:29:00 crc kubenswrapper[5099]: I0121 18:29:00.837242 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"]
Jan 21 18:29:01 crc kubenswrapper[5099]: I0121 18:29:01.925030 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2d325bb-8b43-4e45-a98a-a2e5b493f435" path="/var/lib/kubelet/pods/d2d325bb-8b43-4e45-a98a-a2e5b493f435/volumes"
Jan 21 18:29:02 crc kubenswrapper[5099]: I0121 18:29:02.573097 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0"
Jan 21 18:29:04 crc kubenswrapper[5099]: I0121 18:29:04.346557 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6pvpm_d9b34413-4767-4d59-b13b-8f882453977a/kube-multus/0.log"
Jan 21 18:29:04 crc kubenswrapper[5099]: I0121 18:29:04.346558 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6pvpm_d9b34413-4767-4d59-b13b-8f882453977a/kube-multus/0.log"
Jan 21 18:29:04 crc kubenswrapper[5099]: I0121 18:29:04.361323 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 21 18:29:04 crc kubenswrapper[5099]: I0121 18:29:04.361567 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 21 18:29:07 crc kubenswrapper[5099]: I0121 18:29:07.851290 5099 generic.go:358] "Generic (PLEG): container finished" podID="356d3d5a-88fb-4d4c-bc79-cc28af1ac489" containerID="69ff368e5a3d855a920e57677997d1832764b47c591ecec2f1b3dc6e41e8d326" exitCode=0
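The "SyncLoop (PLEG): event for pod" entries carry their payload as printable JSON (event={"ID":...,"Type":...,"Data":...}), where ID is the pod UID and Data a container or sandbox ID. Read in order, they tell the story of build 1: manage-dockerfile started (ContainerStarted e044b6d8...), was killed with a 30 s grace period, finished with exitCode=1, its sandbox f9f92a70... died, and RemoveContainer plus the volume teardown above cleaned everything up. A small Go sketch (a hypothetical helper, not kubelet code; it assumes a saved log on stdin) that extracts such a timeline:

// plegtimeline.go - pull the PLEG event payloads (printed as JSON in the
// lines above) out of a saved kubelet log and list them in order.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"regexp"
)

// plegEvent mirrors the event= payload as it appears in the log text.
type plegEvent struct {
	ID   string // pod UID
	Type string // ContainerStarted / ContainerDied
	Data string // container or sandbox ID
}

func main() {
	re := regexp.MustCompile(`event=({[^}]*})`)
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024)
	for sc.Scan() {
		m := re.FindStringSubmatch(sc.Text())
		if m == nil {
			continue
		}
		var ev plegEvent
		if json.Unmarshal([]byte(m[1]), &ev) == nil {
			fmt.Printf("%-16s pod=%s id=%.12s\n", ev.Type, ev.ID, ev.Data)
		}
	}
}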
podID="356d3d5a-88fb-4d4c-bc79-cc28af1ac489" containerID="69ff368e5a3d855a920e57677997d1832764b47c591ecec2f1b3dc6e41e8d326" exitCode=0 Jan 21 18:29:07 crc kubenswrapper[5099]: I0121 18:29:07.851413 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"356d3d5a-88fb-4d4c-bc79-cc28af1ac489","Type":"ContainerDied","Data":"69ff368e5a3d855a920e57677997d1832764b47c591ecec2f1b3dc6e41e8d326"} Jan 21 18:29:08 crc kubenswrapper[5099]: I0121 18:29:08.863240 5099 generic.go:358] "Generic (PLEG): container finished" podID="356d3d5a-88fb-4d4c-bc79-cc28af1ac489" containerID="356bc96960a9d71a199c8be5419f3008b7ce2a8243e9be29efad677fcb07d7da" exitCode=0 Jan 21 18:29:08 crc kubenswrapper[5099]: I0121 18:29:08.863335 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"356d3d5a-88fb-4d4c-bc79-cc28af1ac489","Type":"ContainerDied","Data":"356bc96960a9d71a199c8be5419f3008b7ce2a8243e9be29efad677fcb07d7da"} Jan 21 18:29:08 crc kubenswrapper[5099]: I0121 18:29:08.940675 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_356d3d5a-88fb-4d4c-bc79-cc28af1ac489/manage-dockerfile/0.log" Jan 21 18:29:09 crc kubenswrapper[5099]: I0121 18:29:09.875521 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"356d3d5a-88fb-4d4c-bc79-cc28af1ac489","Type":"ContainerStarted","Data":"d038ec78d813821d35ce242e33ac821a726152e5277a370ff8eb2b29a7d4a618"} Jan 21 18:29:09 crc kubenswrapper[5099]: I0121 18:29:09.908545 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-2-build" podStartSLOduration=18.908511899 podStartE2EDuration="18.908511899s" podCreationTimestamp="2026-01-21 18:28:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:29:09.904759108 +0000 UTC m=+907.318721589" watchObservedRunningTime="2026-01-21 18:29:09.908511899 +0000 UTC m=+907.322474360" Jan 21 18:29:16 crc kubenswrapper[5099]: I0121 18:29:16.141097 5099 scope.go:117] "RemoveContainer" containerID="a188f09831633ff3332f76f220f223accd559d5d7c87ade9a5f39b641e4d24ac" Jan 21 18:30:00 crc kubenswrapper[5099]: I0121 18:30:00.150807 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483670-nvvfv"] Jan 21 18:30:00 crc kubenswrapper[5099]: I0121 18:30:00.152805 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d2d325bb-8b43-4e45-a98a-a2e5b493f435" containerName="manage-dockerfile" Jan 21 18:30:00 crc kubenswrapper[5099]: I0121 18:30:00.152836 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2d325bb-8b43-4e45-a98a-a2e5b493f435" containerName="manage-dockerfile" Jan 21 18:30:00 crc kubenswrapper[5099]: I0121 18:30:00.152990 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="d2d325bb-8b43-4e45-a98a-a2e5b493f435" containerName="manage-dockerfile" Jan 21 18:30:00 crc kubenswrapper[5099]: I0121 18:30:00.947651 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483670-ht5fw"] Jan 21 18:30:00 crc kubenswrapper[5099]: I0121 18:30:00.948035 5099 util.go:30] "No sandbox for pod can be found. 
Jan 21 18:30:00 crc kubenswrapper[5099]: I0121 18:30:00.953113 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\""
Jan 21 18:30:00 crc kubenswrapper[5099]: I0121 18:30:00.954405 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483670-ht5fw"
Jan 21 18:30:00 crc kubenswrapper[5099]: I0121 18:30:00.954237 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483670-nvvfv"]
Jan 21 18:30:00 crc kubenswrapper[5099]: I0121 18:30:00.954896 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483670-ht5fw"]
Jan 21 18:30:00 crc kubenswrapper[5099]: I0121 18:30:00.955024 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\""
Jan 21 18:30:00 crc kubenswrapper[5099]: I0121 18:30:00.956087 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-z79sf\""
Jan 21 18:30:00 crc kubenswrapper[5099]: I0121 18:30:00.957626 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 21 18:30:00 crc kubenswrapper[5099]: I0121 18:30:00.958569 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 21 18:30:01 crc kubenswrapper[5099]: I0121 18:30:01.063634 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t26gr\" (UniqueName: \"kubernetes.io/projected/ae7199af-7d8a-4536-a3d5-82da6a93ce67-kube-api-access-t26gr\") pod \"auto-csr-approver-29483670-ht5fw\" (UID: \"ae7199af-7d8a-4536-a3d5-82da6a93ce67\") " pod="openshift-infra/auto-csr-approver-29483670-ht5fw"
Jan 21 18:30:01 crc kubenswrapper[5099]: I0121 18:30:01.063839 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9c4af3f3-c4b5-4dad-b8df-57771df1cab0-secret-volume\") pod \"collect-profiles-29483670-nvvfv\" (UID: \"9c4af3f3-c4b5-4dad-b8df-57771df1cab0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483670-nvvfv"
Jan 21 18:30:01 crc kubenswrapper[5099]: I0121 18:30:01.064135 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c4af3f3-c4b5-4dad-b8df-57771df1cab0-config-volume\") pod \"collect-profiles-29483670-nvvfv\" (UID: \"9c4af3f3-c4b5-4dad-b8df-57771df1cab0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483670-nvvfv"
Jan 21 18:30:01 crc kubenswrapper[5099]: I0121 18:30:01.064224 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq76r\" (UniqueName: \"kubernetes.io/projected/9c4af3f3-c4b5-4dad-b8df-57771df1cab0-kube-api-access-pq76r\") pod \"collect-profiles-29483670-nvvfv\" (UID: \"9c4af3f3-c4b5-4dad-b8df-57771df1cab0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483670-nvvfv"
Jan 21 18:30:01 crc kubenswrapper[5099]: I0121 18:30:01.166785 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t26gr\" (UniqueName: \"kubernetes.io/projected/ae7199af-7d8a-4536-a3d5-82da6a93ce67-kube-api-access-t26gr\") pod \"auto-csr-approver-29483670-ht5fw\" (UID: \"ae7199af-7d8a-4536-a3d5-82da6a93ce67\") " pod="openshift-infra/auto-csr-approver-29483670-ht5fw"
Jan 21 18:30:01 crc kubenswrapper[5099]: I0121 18:30:01.168173 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9c4af3f3-c4b5-4dad-b8df-57771df1cab0-secret-volume\") pod \"collect-profiles-29483670-nvvfv\" (UID: \"9c4af3f3-c4b5-4dad-b8df-57771df1cab0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483670-nvvfv"
Jan 21 18:30:01 crc kubenswrapper[5099]: I0121 18:30:01.168391 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c4af3f3-c4b5-4dad-b8df-57771df1cab0-config-volume\") pod \"collect-profiles-29483670-nvvfv\" (UID: \"9c4af3f3-c4b5-4dad-b8df-57771df1cab0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483670-nvvfv"
Jan 21 18:30:01 crc kubenswrapper[5099]: I0121 18:30:01.168546 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pq76r\" (UniqueName: \"kubernetes.io/projected/9c4af3f3-c4b5-4dad-b8df-57771df1cab0-kube-api-access-pq76r\") pod \"collect-profiles-29483670-nvvfv\" (UID: \"9c4af3f3-c4b5-4dad-b8df-57771df1cab0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483670-nvvfv"
Jan 21 18:30:01 crc kubenswrapper[5099]: I0121 18:30:01.170508 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c4af3f3-c4b5-4dad-b8df-57771df1cab0-config-volume\") pod \"collect-profiles-29483670-nvvfv\" (UID: \"9c4af3f3-c4b5-4dad-b8df-57771df1cab0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483670-nvvfv"
Jan 21 18:30:01 crc kubenswrapper[5099]: I0121 18:30:01.179326 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9c4af3f3-c4b5-4dad-b8df-57771df1cab0-secret-volume\") pod \"collect-profiles-29483670-nvvfv\" (UID: \"9c4af3f3-c4b5-4dad-b8df-57771df1cab0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483670-nvvfv"
Jan 21 18:30:01 crc kubenswrapper[5099]: I0121 18:30:01.188534 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t26gr\" (UniqueName: \"kubernetes.io/projected/ae7199af-7d8a-4536-a3d5-82da6a93ce67-kube-api-access-t26gr\") pod \"auto-csr-approver-29483670-ht5fw\" (UID: \"ae7199af-7d8a-4536-a3d5-82da6a93ce67\") " pod="openshift-infra/auto-csr-approver-29483670-ht5fw"
Jan 21 18:30:01 crc kubenswrapper[5099]: I0121 18:30:01.189679 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq76r\" (UniqueName: \"kubernetes.io/projected/9c4af3f3-c4b5-4dad-b8df-57771df1cab0-kube-api-access-pq76r\") pod \"collect-profiles-29483670-nvvfv\" (UID: \"9c4af3f3-c4b5-4dad-b8df-57771df1cab0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483670-nvvfv"
Jan 21 18:30:01 crc kubenswrapper[5099]: I0121 18:30:01.285844 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483670-nvvfv"
Jan 21 18:30:01 crc kubenswrapper[5099]: I0121 18:30:01.304002 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483670-ht5fw"
Jan 21 18:30:01 crc kubenswrapper[5099]: I0121 18:30:01.664141 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483670-nvvfv"]
Jan 21 18:30:01 crc kubenswrapper[5099]: I0121 18:30:01.887894 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483670-ht5fw"]
Jan 21 18:30:01 crc kubenswrapper[5099]: W0121 18:30:01.916386 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae7199af_7d8a_4536_a3d5_82da6a93ce67.slice/crio-d24b9b6544cdf22cbe742ed43f9852ba688eae2bbf6c460d90cec49a0141e47d WatchSource:0}: Error finding container d24b9b6544cdf22cbe742ed43f9852ba688eae2bbf6c460d90cec49a0141e47d: Status 404 returned error can't find the container with id d24b9b6544cdf22cbe742ed43f9852ba688eae2bbf6c460d90cec49a0141e47d
Jan 21 18:30:02 crc kubenswrapper[5099]: I0121 18:30:02.476472 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483670-nvvfv" event={"ID":"9c4af3f3-c4b5-4dad-b8df-57771df1cab0","Type":"ContainerDied","Data":"c36363a9675953e73b5b1b57297647794be5a4cdc189c13612196c3191395ef9"}
Jan 21 18:30:02 crc kubenswrapper[5099]: I0121 18:30:02.476501 5099 generic.go:358] "Generic (PLEG): container finished" podID="9c4af3f3-c4b5-4dad-b8df-57771df1cab0" containerID="c36363a9675953e73b5b1b57297647794be5a4cdc189c13612196c3191395ef9" exitCode=0
Jan 21 18:30:02 crc kubenswrapper[5099]: I0121 18:30:02.476667 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483670-nvvfv" event={"ID":"9c4af3f3-c4b5-4dad-b8df-57771df1cab0","Type":"ContainerStarted","Data":"4994b2a06a4fedef48fc24c7bd79568e03dd7cb29b5b8bf88130222fed05fee6"}
Jan 21 18:30:02 crc kubenswrapper[5099]: I0121 18:30:02.478663 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483670-ht5fw" event={"ID":"ae7199af-7d8a-4536-a3d5-82da6a93ce67","Type":"ContainerStarted","Data":"d24b9b6544cdf22cbe742ed43f9852ba688eae2bbf6c460d90cec49a0141e47d"}
Jan 21 18:30:03 crc kubenswrapper[5099]: I0121 18:30:03.908009 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483670-nvvfv"
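
The kube-api-access-* volumes that VerifyControllerAttachedVolume and MountVolume handle above are the projected service-account volumes injected into every pod: a bound token, the kube-root-ca.crt bundle, and a downward-API namespace file. A sketch of an equivalent volume built with k8s.io/api/core/v1; the 3607-second token expiry and the exact item layout are assumed defaults, not values readable from this log:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    // kubeAPIAccessVolume approximates the projected volume the control plane
    // generates automatically; nothing here is copied from a real pod spec.
    func kubeAPIAccessVolume(name string) corev1.Volume {
    	expiry := int64(3607) // assumed default, not read from this log
    	return corev1.Volume{
    		Name: name,
    		VolumeSource: corev1.VolumeSource{
    			Projected: &corev1.ProjectedVolumeSource{
    				Sources: []corev1.VolumeProjection{
    					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
    						Path:              "token",
    						ExpirationSeconds: &expiry,
    					}},
    					{ConfigMap: &corev1.ConfigMapProjection{
    						LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
    						Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
    					}},
    					{DownwardAPI: &corev1.DownwardAPIProjection{
    						Items: []corev1.DownwardAPIVolumeFile{{
    							Path:     "namespace",
    							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"},
    						}},
    					}},
    				},
    			},
    		},
    	}
    }

    func main() {
    	fmt.Printf("%+v\n", kubeAPIAccessVolume("kube-api-access-pq76r"))
    }
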
Jan 21 18:30:04 crc kubenswrapper[5099]: I0121 18:30:04.062361 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c4af3f3-c4b5-4dad-b8df-57771df1cab0-config-volume\") pod \"9c4af3f3-c4b5-4dad-b8df-57771df1cab0\" (UID: \"9c4af3f3-c4b5-4dad-b8df-57771df1cab0\") "
Jan 21 18:30:04 crc kubenswrapper[5099]: I0121 18:30:04.062797 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9c4af3f3-c4b5-4dad-b8df-57771df1cab0-secret-volume\") pod \"9c4af3f3-c4b5-4dad-b8df-57771df1cab0\" (UID: \"9c4af3f3-c4b5-4dad-b8df-57771df1cab0\") "
Jan 21 18:30:04 crc kubenswrapper[5099]: I0121 18:30:04.062971 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pq76r\" (UniqueName: \"kubernetes.io/projected/9c4af3f3-c4b5-4dad-b8df-57771df1cab0-kube-api-access-pq76r\") pod \"9c4af3f3-c4b5-4dad-b8df-57771df1cab0\" (UID: \"9c4af3f3-c4b5-4dad-b8df-57771df1cab0\") "
Jan 21 18:30:04 crc kubenswrapper[5099]: I0121 18:30:04.065004 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c4af3f3-c4b5-4dad-b8df-57771df1cab0-config-volume" (OuterVolumeSpecName: "config-volume") pod "9c4af3f3-c4b5-4dad-b8df-57771df1cab0" (UID: "9c4af3f3-c4b5-4dad-b8df-57771df1cab0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 18:30:04 crc kubenswrapper[5099]: I0121 18:30:04.087898 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c4af3f3-c4b5-4dad-b8df-57771df1cab0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9c4af3f3-c4b5-4dad-b8df-57771df1cab0" (UID: "9c4af3f3-c4b5-4dad-b8df-57771df1cab0"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 18:30:04 crc kubenswrapper[5099]: I0121 18:30:04.088042 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c4af3f3-c4b5-4dad-b8df-57771df1cab0-kube-api-access-pq76r" (OuterVolumeSpecName: "kube-api-access-pq76r") pod "9c4af3f3-c4b5-4dad-b8df-57771df1cab0" (UID: "9c4af3f3-c4b5-4dad-b8df-57771df1cab0"). InnerVolumeSpecName "kube-api-access-pq76r". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 18:30:04 crc kubenswrapper[5099]: I0121 18:30:04.165209 5099 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c4af3f3-c4b5-4dad-b8df-57771df1cab0-config-volume\") on node \"crc\" DevicePath \"\""
Jan 21 18:30:04 crc kubenswrapper[5099]: I0121 18:30:04.165273 5099 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9c4af3f3-c4b5-4dad-b8df-57771df1cab0-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 21 18:30:04 crc kubenswrapper[5099]: I0121 18:30:04.165286 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pq76r\" (UniqueName: \"kubernetes.io/projected/9c4af3f3-c4b5-4dad-b8df-57771df1cab0-kube-api-access-pq76r\") on node \"crc\" DevicePath \"\""
Jan 21 18:30:04 crc kubenswrapper[5099]: I0121 18:30:04.501053 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483670-nvvfv" event={"ID":"9c4af3f3-c4b5-4dad-b8df-57771df1cab0","Type":"ContainerDied","Data":"4994b2a06a4fedef48fc24c7bd79568e03dd7cb29b5b8bf88130222fed05fee6"}
Jan 21 18:30:04 crc kubenswrapper[5099]: I0121 18:30:04.501137 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4994b2a06a4fedef48fc24c7bd79568e03dd7cb29b5b8bf88130222fed05fee6"
Jan 21 18:30:04 crc kubenswrapper[5099]: I0121 18:30:04.501141 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483670-nvvfv"
Jan 21 18:30:05 crc kubenswrapper[5099]: I0121 18:30:05.512936 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483670-ht5fw" event={"ID":"ae7199af-7d8a-4536-a3d5-82da6a93ce67","Type":"ContainerStarted","Data":"015241055a520841093f569cf85743136964fd459b52302e80e0c34feacf5659"}
Jan 21 18:30:08 crc kubenswrapper[5099]: I0121 18:30:08.543008 5099 generic.go:358] "Generic (PLEG): container finished" podID="ae7199af-7d8a-4536-a3d5-82da6a93ce67" containerID="015241055a520841093f569cf85743136964fd459b52302e80e0c34feacf5659" exitCode=0
Jan 21 18:30:08 crc kubenswrapper[5099]: I0121 18:30:08.543111 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483670-ht5fw" event={"ID":"ae7199af-7d8a-4536-a3d5-82da6a93ce67","Type":"ContainerDied","Data":"015241055a520841093f569cf85743136964fd459b52302e80e0c34feacf5659"}
Jan 21 18:30:09 crc kubenswrapper[5099]: I0121 18:30:09.830781 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483670-ht5fw"
Jan 21 18:30:09 crc kubenswrapper[5099]: I0121 18:30:09.890755 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t26gr\" (UniqueName: \"kubernetes.io/projected/ae7199af-7d8a-4536-a3d5-82da6a93ce67-kube-api-access-t26gr\") pod \"ae7199af-7d8a-4536-a3d5-82da6a93ce67\" (UID: \"ae7199af-7d8a-4536-a3d5-82da6a93ce67\") "
Jan 21 18:30:09 crc kubenswrapper[5099]: I0121 18:30:09.901446 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae7199af-7d8a-4536-a3d5-82da6a93ce67-kube-api-access-t26gr" (OuterVolumeSpecName: "kube-api-access-t26gr") pod "ae7199af-7d8a-4536-a3d5-82da6a93ce67" (UID: "ae7199af-7d8a-4536-a3d5-82da6a93ce67"). InnerVolumeSpecName "kube-api-access-t26gr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:30:09 crc kubenswrapper[5099]: I0121 18:30:09.992822 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t26gr\" (UniqueName: \"kubernetes.io/projected/ae7199af-7d8a-4536-a3d5-82da6a93ce67-kube-api-access-t26gr\") on node \"crc\" DevicePath \"\"" Jan 21 18:30:10 crc kubenswrapper[5099]: I0121 18:30:10.564156 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483670-ht5fw" Jan 21 18:30:10 crc kubenswrapper[5099]: I0121 18:30:10.564191 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483670-ht5fw" event={"ID":"ae7199af-7d8a-4536-a3d5-82da6a93ce67","Type":"ContainerDied","Data":"d24b9b6544cdf22cbe742ed43f9852ba688eae2bbf6c460d90cec49a0141e47d"} Jan 21 18:30:10 crc kubenswrapper[5099]: I0121 18:30:10.564974 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d24b9b6544cdf22cbe742ed43f9852ba688eae2bbf6c460d90cec49a0141e47d" Jan 21 18:30:10 crc kubenswrapper[5099]: I0121 18:30:10.917637 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483664-74kt9"] Jan 21 18:30:10 crc kubenswrapper[5099]: I0121 18:30:10.923182 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483664-74kt9"] Jan 21 18:30:11 crc kubenswrapper[5099]: I0121 18:30:11.926481 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2142c023-8835-4160-a6f6-fccfb6a68ba7" path="/var/lib/kubelet/pods/2142c023-8835-4160-a6f6-fccfb6a68ba7/volumes" Jan 21 18:30:16 crc kubenswrapper[5099]: I0121 18:30:16.406872 5099 scope.go:117] "RemoveContainer" containerID="bb021283917d73fe471a685d1f7d607443d06de7b8893b1a750aba5095ac3555" Jan 21 18:30:22 crc kubenswrapper[5099]: I0121 18:30:22.064576 5099 patch_prober.go:28] interesting pod/machine-config-daemon-hsl47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 18:30:22 crc kubenswrapper[5099]: I0121 18:30:22.065357 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 18:30:52 crc kubenswrapper[5099]: I0121 18:30:52.065538 5099 patch_prober.go:28] interesting pod/machine-config-daemon-hsl47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 18:30:52 crc kubenswrapper[5099]: I0121 18:30:52.066260 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 18:31:20 crc kubenswrapper[5099]: I0121 18:31:20.349658 5099 generic.go:358] "Generic (PLEG): container finished" podID="356d3d5a-88fb-4d4c-bc79-cc28af1ac489" 
containerID="d038ec78d813821d35ce242e33ac821a726152e5277a370ff8eb2b29a7d4a618" exitCode=0 Jan 21 18:31:20 crc kubenswrapper[5099]: I0121 18:31:20.349774 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"356d3d5a-88fb-4d4c-bc79-cc28af1ac489","Type":"ContainerDied","Data":"d038ec78d813821d35ce242e33ac821a726152e5277a370ff8eb2b29a7d4a618"} Jan 21 18:31:21 crc kubenswrapper[5099]: I0121 18:31:21.657505 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 21 18:31:21 crc kubenswrapper[5099]: I0121 18:31:21.736090 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-build-blob-cache\") pod \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\" (UID: \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\") " Jan 21 18:31:21 crc kubenswrapper[5099]: I0121 18:31:21.736179 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-build-ca-bundles\") pod \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\" (UID: \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\") " Jan 21 18:31:21 crc kubenswrapper[5099]: I0121 18:31:21.736204 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-buildworkdir\") pod \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\" (UID: \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\") " Jan 21 18:31:21 crc kubenswrapper[5099]: I0121 18:31:21.736290 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-j8qh6-pull\" (UniqueName: \"kubernetes.io/secret/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-builder-dockercfg-j8qh6-pull\") pod \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\" (UID: \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\") " Jan 21 18:31:21 crc kubenswrapper[5099]: I0121 18:31:21.736323 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-build-system-configs\") pod \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\" (UID: \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\") " Jan 21 18:31:21 crc kubenswrapper[5099]: I0121 18:31:21.737082 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-buildcachedir\") pod \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\" (UID: \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\") " Jan 21 18:31:21 crc kubenswrapper[5099]: I0121 18:31:21.737130 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvzh7\" (UniqueName: \"kubernetes.io/projected/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-kube-api-access-kvzh7\") pod \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\" (UID: \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\") " Jan 21 18:31:21 crc kubenswrapper[5099]: I0121 18:31:21.737136 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "356d3d5a-88fb-4d4c-bc79-cc28af1ac489" (UID: "356d3d5a-88fb-4d4c-bc79-cc28af1ac489"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 18:31:21 crc kubenswrapper[5099]: I0121 18:31:21.737172 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-container-storage-run\") pod \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\" (UID: \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\") " Jan 21 18:31:21 crc kubenswrapper[5099]: I0121 18:31:21.737208 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-node-pullsecrets\") pod \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\" (UID: \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\") " Jan 21 18:31:21 crc kubenswrapper[5099]: I0121 18:31:21.737264 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-build-proxy-ca-bundles\") pod \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\" (UID: \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\") " Jan 21 18:31:21 crc kubenswrapper[5099]: I0121 18:31:21.737271 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "356d3d5a-88fb-4d4c-bc79-cc28af1ac489" (UID: "356d3d5a-88fb-4d4c-bc79-cc28af1ac489"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:31:21 crc kubenswrapper[5099]: I0121 18:31:21.737318 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-container-storage-root\") pod \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\" (UID: \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\") " Jan 21 18:31:21 crc kubenswrapper[5099]: I0121 18:31:21.737402 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-j8qh6-push\" (UniqueName: \"kubernetes.io/secret/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-builder-dockercfg-j8qh6-push\") pod \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\" (UID: \"356d3d5a-88fb-4d4c-bc79-cc28af1ac489\") " Jan 21 18:31:21 crc kubenswrapper[5099]: I0121 18:31:21.737321 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "356d3d5a-88fb-4d4c-bc79-cc28af1ac489" (UID: "356d3d5a-88fb-4d4c-bc79-cc28af1ac489"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 18:31:21 crc kubenswrapper[5099]: I0121 18:31:21.737552 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "356d3d5a-88fb-4d4c-bc79-cc28af1ac489" (UID: "356d3d5a-88fb-4d4c-bc79-cc28af1ac489"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:31:21 crc kubenswrapper[5099]: I0121 18:31:21.737828 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "356d3d5a-88fb-4d4c-bc79-cc28af1ac489" (UID: "356d3d5a-88fb-4d4c-bc79-cc28af1ac489"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:31:21 crc kubenswrapper[5099]: I0121 18:31:21.738020 5099 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 18:31:21 crc kubenswrapper[5099]: I0121 18:31:21.738043 5099 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 18:31:21 crc kubenswrapper[5099]: I0121 18:31:21.738053 5099 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 21 18:31:21 crc kubenswrapper[5099]: I0121 18:31:21.738063 5099 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 21 18:31:21 crc kubenswrapper[5099]: I0121 18:31:21.738072 5099 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 21 18:31:21 crc kubenswrapper[5099]: I0121 18:31:21.738342 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "356d3d5a-88fb-4d4c-bc79-cc28af1ac489" (UID: "356d3d5a-88fb-4d4c-bc79-cc28af1ac489"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:31:21 crc kubenswrapper[5099]: I0121 18:31:21.744046 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-builder-dockercfg-j8qh6-push" (OuterVolumeSpecName: "builder-dockercfg-j8qh6-push") pod "356d3d5a-88fb-4d4c-bc79-cc28af1ac489" (UID: "356d3d5a-88fb-4d4c-bc79-cc28af1ac489"). InnerVolumeSpecName "builder-dockercfg-j8qh6-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:31:21 crc kubenswrapper[5099]: I0121 18:31:21.744061 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-kube-api-access-kvzh7" (OuterVolumeSpecName: "kube-api-access-kvzh7") pod "356d3d5a-88fb-4d4c-bc79-cc28af1ac489" (UID: "356d3d5a-88fb-4d4c-bc79-cc28af1ac489"). InnerVolumeSpecName "kube-api-access-kvzh7". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:31:21 crc kubenswrapper[5099]: I0121 18:31:21.744129 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-builder-dockercfg-j8qh6-pull" (OuterVolumeSpecName: "builder-dockercfg-j8qh6-pull") pod "356d3d5a-88fb-4d4c-bc79-cc28af1ac489" (UID: "356d3d5a-88fb-4d4c-bc79-cc28af1ac489"). InnerVolumeSpecName "builder-dockercfg-j8qh6-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:31:21 crc kubenswrapper[5099]: I0121 18:31:21.779635 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "356d3d5a-88fb-4d4c-bc79-cc28af1ac489" (UID: "356d3d5a-88fb-4d4c-bc79-cc28af1ac489"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:31:21 crc kubenswrapper[5099]: I0121 18:31:21.839553 5099 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-j8qh6-push\" (UniqueName: \"kubernetes.io/secret/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-builder-dockercfg-j8qh6-push\") on node \"crc\" DevicePath \"\"" Jan 21 18:31:21 crc kubenswrapper[5099]: I0121 18:31:21.839615 5099 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 21 18:31:21 crc kubenswrapper[5099]: I0121 18:31:21.839633 5099 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-j8qh6-pull\" (UniqueName: \"kubernetes.io/secret/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-builder-dockercfg-j8qh6-pull\") on node \"crc\" DevicePath \"\"" Jan 21 18:31:21 crc kubenswrapper[5099]: I0121 18:31:21.839644 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kvzh7\" (UniqueName: \"kubernetes.io/projected/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-kube-api-access-kvzh7\") on node \"crc\" DevicePath \"\"" Jan 21 18:31:21 crc kubenswrapper[5099]: I0121 18:31:21.839654 5099 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 21 18:31:21 crc kubenswrapper[5099]: I0121 18:31:21.918953 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "356d3d5a-88fb-4d4c-bc79-cc28af1ac489" (UID: "356d3d5a-88fb-4d4c-bc79-cc28af1ac489"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:31:21 crc kubenswrapper[5099]: I0121 18:31:21.941575 5099 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 21 18:31:22 crc kubenswrapper[5099]: I0121 18:31:22.065090 5099 patch_prober.go:28] interesting pod/machine-config-daemon-hsl47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 18:31:22 crc kubenswrapper[5099]: I0121 18:31:22.065264 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 18:31:22 crc kubenswrapper[5099]: I0121 18:31:22.065342 5099 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" Jan 21 18:31:22 crc kubenswrapper[5099]: I0121 18:31:22.066349 5099 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"47739ad43226ccaa23d66e4f75a21cb2d01702a76a41ce8c63bde01121040b33"} pod="openshift-machine-config-operator/machine-config-daemon-hsl47" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 18:31:22 crc kubenswrapper[5099]: I0121 18:31:22.066428 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" containerID="cri-o://47739ad43226ccaa23d66e4f75a21cb2d01702a76a41ce8c63bde01121040b33" gracePeriod=600 Jan 21 18:31:22 crc kubenswrapper[5099]: I0121 18:31:22.213099 5099 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 18:31:22 crc kubenswrapper[5099]: I0121 18:31:22.369211 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"356d3d5a-88fb-4d4c-bc79-cc28af1ac489","Type":"ContainerDied","Data":"c584fcee4864730e932626c8903071b3e5830d396fe2226df36e0d15b714f4b6"} Jan 21 18:31:22 crc kubenswrapper[5099]: I0121 18:31:22.369290 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c584fcee4864730e932626c8903071b3e5830d396fe2226df36e0d15b714f4b6" Jan 21 18:31:22 crc kubenswrapper[5099]: I0121 18:31:22.369508 5099 util.go:48] "No ready sandbox for pod can be found. 
Jan 21 18:31:22 crc kubenswrapper[5099]: I0121 18:31:22.373946 5099 generic.go:358] "Generic (PLEG): container finished" podID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerID="47739ad43226ccaa23d66e4f75a21cb2d01702a76a41ce8c63bde01121040b33" exitCode=0
Jan 21 18:31:22 crc kubenswrapper[5099]: I0121 18:31:22.374147 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" event={"ID":"b19b831f-eaf0-4c77-859b-84eb9a5f233c","Type":"ContainerDied","Data":"47739ad43226ccaa23d66e4f75a21cb2d01702a76a41ce8c63bde01121040b33"}
Jan 21 18:31:22 crc kubenswrapper[5099]: I0121 18:31:22.374192 5099 scope.go:117] "RemoveContainer" containerID="51ffcc3cf1aa6ab3bfdb8cd2b8bb98ce9b9992d447364b1a4c0eb51c24a6f574"
Jan 21 18:31:23 crc kubenswrapper[5099]: I0121 18:31:23.385088 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" event={"ID":"b19b831f-eaf0-4c77-859b-84eb9a5f233c","Type":"ContainerStarted","Data":"cf42f9592aaf93662bf63df43e028bef59eb8696172829a214d5c769d98dba4f"}
Jan 21 18:31:23 crc kubenswrapper[5099]: I0121 18:31:23.685178 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "356d3d5a-88fb-4d4c-bc79-cc28af1ac489" (UID: "356d3d5a-88fb-4d4c-bc79-cc28af1ac489"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 18:31:23 crc kubenswrapper[5099]: I0121 18:31:23.772634 5099 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/356d3d5a-88fb-4d4c-bc79-cc28af1ac489-container-storage-root\") on node \"crc\" DevicePath \"\""
Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.554318 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"]
Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.555666 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ae7199af-7d8a-4536-a3d5-82da6a93ce67" containerName="oc"
Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.555684 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae7199af-7d8a-4536-a3d5-82da6a93ce67" containerName="oc"
Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.555695 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9c4af3f3-c4b5-4dad-b8df-57771df1cab0" containerName="collect-profiles"
Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.555702 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c4af3f3-c4b5-4dad-b8df-57771df1cab0" containerName="collect-profiles"
Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.555720 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="356d3d5a-88fb-4d4c-bc79-cc28af1ac489" containerName="git-clone"
Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.555726 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="356d3d5a-88fb-4d4c-bc79-cc28af1ac489" containerName="git-clone"
Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.555756 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="356d3d5a-88fb-4d4c-bc79-cc28af1ac489" containerName="manage-dockerfile"
Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.555764 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="356d3d5a-88fb-4d4c-bc79-cc28af1ac489" containerName="manage-dockerfile"
Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.555779 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="356d3d5a-88fb-4d4c-bc79-cc28af1ac489" containerName="docker-build"
Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.555787 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="356d3d5a-88fb-4d4c-bc79-cc28af1ac489" containerName="docker-build"
Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.555926 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="9c4af3f3-c4b5-4dad-b8df-57771df1cab0" containerName="collect-profiles"
Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.555939 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="ae7199af-7d8a-4536-a3d5-82da6a93ce67" containerName="oc"
Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.555953 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="356d3d5a-88fb-4d4c-bc79-cc28af1ac489" containerName="docker-build"
Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.560244 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.567260 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-1-sys-config\""
Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.567593 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-1-ca\""
Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.572869 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"]
Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.574904 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-1-global-ca\""
Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.575214 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-j8qh6\""
Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.723065 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfqxw\" (UniqueName: \"kubernetes.io/projected/cf7a2494-f386-4b84-910b-40a693faa3a4-kube-api-access-rfqxw\") pod \"smart-gateway-operator-1-build\" (UID: \"cf7a2494-f386-4b84-910b-40a693faa3a4\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.723140 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cf7a2494-f386-4b84-910b-40a693faa3a4-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"cf7a2494-f386-4b84-910b-40a693faa3a4\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.723475 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cf7a2494-f386-4b84-910b-40a693faa3a4-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"cf7a2494-f386-4b84-910b-40a693faa3a4\") " pod="service-telemetry/smart-gateway-operator-1-build"
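The cpu_manager/state_mem/memory_manager burst above runs when a new pod is admitted: before assigning resources, each manager sweeps out containerMap entries and assignments belonging to pods that no longer exist (the finished build and cron pods). A toy version of that sweep; the real managers key state by podUID plus containerName as here, but everything else is simplified:

    package main

    import "fmt"

    type key struct{ podUID, containerName string }

    // removeStaleState drops assignments for containers whose pod is no
    // longer in the active set, mirroring the cpu_manager.go:401 entries.
    func removeStaleState(assignments map[key]string, activePods map[string]bool) {
    	for k := range assignments {
    		if !activePods[k.podUID] {
    			fmt.Printf("RemoveStaleState: removing container %q of pod %s\n", k.containerName, k.podUID)
    			delete(assignments, k)
    		}
    	}
    }

    func main() {
    	assignments := map[key]string{
    		{"356d3d5a-88fb-4d4c-bc79-cc28af1ac489", "docker-build"}:     "cpuset 0-3",
    		{"9c4af3f3-c4b5-4dad-b8df-57771df1cab0", "collect-profiles"}: "cpuset 0-3",
    	}
    	// Neither pod is active any more, so both entries are removed.
    	removeStaleState(assignments, map[string]bool{})
    }
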
\"cf7a2494-f386-4b84-910b-40a693faa3a4\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.723591 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cf7a2494-f386-4b84-910b-40a693faa3a4-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"cf7a2494-f386-4b84-910b-40a693faa3a4\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.723655 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-j8qh6-push\" (UniqueName: \"kubernetes.io/secret/cf7a2494-f386-4b84-910b-40a693faa3a4-builder-dockercfg-j8qh6-push\") pod \"smart-gateway-operator-1-build\" (UID: \"cf7a2494-f386-4b84-910b-40a693faa3a4\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.723794 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cf7a2494-f386-4b84-910b-40a693faa3a4-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"cf7a2494-f386-4b84-910b-40a693faa3a4\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.723881 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cf7a2494-f386-4b84-910b-40a693faa3a4-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"cf7a2494-f386-4b84-910b-40a693faa3a4\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.723958 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cf7a2494-f386-4b84-910b-40a693faa3a4-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"cf7a2494-f386-4b84-910b-40a693faa3a4\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.724118 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cf7a2494-f386-4b84-910b-40a693faa3a4-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"cf7a2494-f386-4b84-910b-40a693faa3a4\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.724168 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cf7a2494-f386-4b84-910b-40a693faa3a4-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"cf7a2494-f386-4b84-910b-40a693faa3a4\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.724251 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cf7a2494-f386-4b84-910b-40a693faa3a4-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"cf7a2494-f386-4b84-910b-40a693faa3a4\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 18:31:26 crc 
kubenswrapper[5099]: I0121 18:31:26.724293 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-j8qh6-pull\" (UniqueName: \"kubernetes.io/secret/cf7a2494-f386-4b84-910b-40a693faa3a4-builder-dockercfg-j8qh6-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"cf7a2494-f386-4b84-910b-40a693faa3a4\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.826141 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cf7a2494-f386-4b84-910b-40a693faa3a4-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"cf7a2494-f386-4b84-910b-40a693faa3a4\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.826224 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cf7a2494-f386-4b84-910b-40a693faa3a4-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"cf7a2494-f386-4b84-910b-40a693faa3a4\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.826253 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cf7a2494-f386-4b84-910b-40a693faa3a4-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"cf7a2494-f386-4b84-910b-40a693faa3a4\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.826317 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cf7a2494-f386-4b84-910b-40a693faa3a4-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"cf7a2494-f386-4b84-910b-40a693faa3a4\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.826966 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cf7a2494-f386-4b84-910b-40a693faa3a4-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"cf7a2494-f386-4b84-910b-40a693faa3a4\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.826969 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cf7a2494-f386-4b84-910b-40a693faa3a4-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"cf7a2494-f386-4b84-910b-40a693faa3a4\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.827052 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cf7a2494-f386-4b84-910b-40a693faa3a4-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"cf7a2494-f386-4b84-910b-40a693faa3a4\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.827166 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cf7a2494-f386-4b84-910b-40a693faa3a4-build-blob-cache\") pod \"smart-gateway-operator-1-build\" 
(UID: \"cf7a2494-f386-4b84-910b-40a693faa3a4\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.827393 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cf7a2494-f386-4b84-910b-40a693faa3a4-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"cf7a2494-f386-4b84-910b-40a693faa3a4\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.827435 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-j8qh6-pull\" (UniqueName: \"kubernetes.io/secret/cf7a2494-f386-4b84-910b-40a693faa3a4-builder-dockercfg-j8qh6-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"cf7a2494-f386-4b84-910b-40a693faa3a4\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.827459 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cf7a2494-f386-4b84-910b-40a693faa3a4-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"cf7a2494-f386-4b84-910b-40a693faa3a4\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.827483 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rfqxw\" (UniqueName: \"kubernetes.io/projected/cf7a2494-f386-4b84-910b-40a693faa3a4-kube-api-access-rfqxw\") pod \"smart-gateway-operator-1-build\" (UID: \"cf7a2494-f386-4b84-910b-40a693faa3a4\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.827502 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cf7a2494-f386-4b84-910b-40a693faa3a4-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"cf7a2494-f386-4b84-910b-40a693faa3a4\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.827767 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cf7a2494-f386-4b84-910b-40a693faa3a4-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"cf7a2494-f386-4b84-910b-40a693faa3a4\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.827952 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cf7a2494-f386-4b84-910b-40a693faa3a4-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"cf7a2494-f386-4b84-910b-40a693faa3a4\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.827999 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cf7a2494-f386-4b84-910b-40a693faa3a4-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"cf7a2494-f386-4b84-910b-40a693faa3a4\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.828024 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/cf7a2494-f386-4b84-910b-40a693faa3a4-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"cf7a2494-f386-4b84-910b-40a693faa3a4\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.828166 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-j8qh6-push\" (UniqueName: \"kubernetes.io/secret/cf7a2494-f386-4b84-910b-40a693faa3a4-builder-dockercfg-j8qh6-push\") pod \"smart-gateway-operator-1-build\" (UID: \"cf7a2494-f386-4b84-910b-40a693faa3a4\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.828314 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cf7a2494-f386-4b84-910b-40a693faa3a4-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"cf7a2494-f386-4b84-910b-40a693faa3a4\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.828420 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cf7a2494-f386-4b84-910b-40a693faa3a4-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"cf7a2494-f386-4b84-910b-40a693faa3a4\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.829177 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cf7a2494-f386-4b84-910b-40a693faa3a4-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"cf7a2494-f386-4b84-910b-40a693faa3a4\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.837411 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-j8qh6-push\" (UniqueName: \"kubernetes.io/secret/cf7a2494-f386-4b84-910b-40a693faa3a4-builder-dockercfg-j8qh6-push\") pod \"smart-gateway-operator-1-build\" (UID: \"cf7a2494-f386-4b84-910b-40a693faa3a4\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.837604 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-j8qh6-pull\" (UniqueName: \"kubernetes.io/secret/cf7a2494-f386-4b84-910b-40a693faa3a4-builder-dockercfg-j8qh6-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"cf7a2494-f386-4b84-910b-40a693faa3a4\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.852291 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfqxw\" (UniqueName: \"kubernetes.io/projected/cf7a2494-f386-4b84-910b-40a693faa3a4-kube-api-access-rfqxw\") pod \"smart-gateway-operator-1-build\" (UID: \"cf7a2494-f386-4b84-910b-40a693faa3a4\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 18:31:26 crc kubenswrapper[5099]: I0121 18:31:26.884200 5099 util.go:30] "No sandbox for pod can be found. 
Jan 21 18:31:27 crc kubenswrapper[5099]: I0121 18:31:27.157320 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"]
Jan 21 18:31:27 crc kubenswrapper[5099]: I0121 18:31:27.418559 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"cf7a2494-f386-4b84-910b-40a693faa3a4","Type":"ContainerStarted","Data":"0a806038970f685c653b0913edcc02071804997a3d54f6135c10c7565bfb45c7"}
Jan 21 18:31:28 crc kubenswrapper[5099]: I0121 18:31:28.428948 5099 generic.go:358] "Generic (PLEG): container finished" podID="cf7a2494-f386-4b84-910b-40a693faa3a4" containerID="6a60a0247f99b5a21556bcc94655560b88b7cabe130d0a9a90ab210cc4a230da" exitCode=0
Jan 21 18:31:28 crc kubenswrapper[5099]: I0121 18:31:28.429050 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"cf7a2494-f386-4b84-910b-40a693faa3a4","Type":"ContainerDied","Data":"6a60a0247f99b5a21556bcc94655560b88b7cabe130d0a9a90ab210cc4a230da"}
Jan 21 18:31:29 crc kubenswrapper[5099]: I0121 18:31:29.439715 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"cf7a2494-f386-4b84-910b-40a693faa3a4","Type":"ContainerStarted","Data":"253e1503fa905d39e4d72c59493326a20d77070c465a640f447a85a9567c18f0"}
Jan 21 18:31:29 crc kubenswrapper[5099]: I0121 18:31:29.469618 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-1-build" podStartSLOduration=3.469597749 podStartE2EDuration="3.469597749s" podCreationTimestamp="2026-01-21 18:31:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:31:29.463511542 +0000 UTC m=+1046.877474023" watchObservedRunningTime="2026-01-21 18:31:29.469597749 +0000 UTC m=+1046.883560210"
Jan 21 18:31:37 crc kubenswrapper[5099]: I0121 18:31:37.071221 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"]
Jan 21 18:31:37 crc kubenswrapper[5099]: I0121 18:31:37.072178 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/smart-gateway-operator-1-build" podUID="cf7a2494-f386-4b84-910b-40a693faa3a4" containerName="docker-build" containerID="cri-o://253e1503fa905d39e4d72c59493326a20d77070c465a640f447a85a9567c18f0" gracePeriod=30
Jan 21 18:31:38 crc kubenswrapper[5099]: I0121 18:31:38.733612 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"]
Jan 21 18:31:40 crc kubenswrapper[5099]: I0121 18:31:40.592241 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build"
Jan 21 18:31:40 crc kubenswrapper[5099]: I0121 18:31:40.596684 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-2-global-ca\""
Jan 21 18:31:40 crc kubenswrapper[5099]: I0121 18:31:40.596780 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-2-sys-config\""
Jan 21 18:31:40 crc kubenswrapper[5099]: I0121 18:31:40.596875 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-2-ca\""
Jan 21 18:31:40 crc kubenswrapper[5099]: I0121 18:31:40.604376 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"]
Jan 21 18:31:40 crc kubenswrapper[5099]: I0121 18:31:40.696190 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5a7cdd8f-1476-425d-a189-82a71b306bb2-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"5a7cdd8f-1476-425d-a189-82a71b306bb2\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 21 18:31:40 crc kubenswrapper[5099]: I0121 18:31:40.696254 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-j8qh6-pull\" (UniqueName: \"kubernetes.io/secret/5a7cdd8f-1476-425d-a189-82a71b306bb2-builder-dockercfg-j8qh6-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"5a7cdd8f-1476-425d-a189-82a71b306bb2\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 21 18:31:40 crc kubenswrapper[5099]: I0121 18:31:40.696281 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5a7cdd8f-1476-425d-a189-82a71b306bb2-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"5a7cdd8f-1476-425d-a189-82a71b306bb2\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 21 18:31:40 crc kubenswrapper[5099]: I0121 18:31:40.696304 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5a7cdd8f-1476-425d-a189-82a71b306bb2-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"5a7cdd8f-1476-425d-a189-82a71b306bb2\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 21 18:31:40 crc kubenswrapper[5099]: I0121 18:31:40.696525 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5a7cdd8f-1476-425d-a189-82a71b306bb2-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"5a7cdd8f-1476-425d-a189-82a71b306bb2\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 21 18:31:40 crc kubenswrapper[5099]: I0121 18:31:40.696666 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5a7cdd8f-1476-425d-a189-82a71b306bb2-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"5a7cdd8f-1476-425d-a189-82a71b306bb2\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 21 18:31:40 crc kubenswrapper[5099]: I0121 18:31:40.696789 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-j8qh6-push\" (UniqueName: \"kubernetes.io/secret/5a7cdd8f-1476-425d-a189-82a71b306bb2-builder-dockercfg-j8qh6-push\") pod \"smart-gateway-operator-2-build\" (UID: \"5a7cdd8f-1476-425d-a189-82a71b306bb2\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 21 18:31:40 crc kubenswrapper[5099]: I0121 18:31:40.696872 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5a7cdd8f-1476-425d-a189-82a71b306bb2-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"5a7cdd8f-1476-425d-a189-82a71b306bb2\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 21 18:31:40 crc kubenswrapper[5099]: I0121 18:31:40.696927 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5a7cdd8f-1476-425d-a189-82a71b306bb2-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"5a7cdd8f-1476-425d-a189-82a71b306bb2\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 21 18:31:40 crc kubenswrapper[5099]: I0121 18:31:40.696979 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5a7cdd8f-1476-425d-a189-82a71b306bb2-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"5a7cdd8f-1476-425d-a189-82a71b306bb2\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 21 18:31:40 crc kubenswrapper[5099]: I0121 18:31:40.697137 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tj78h\" (UniqueName: \"kubernetes.io/projected/5a7cdd8f-1476-425d-a189-82a71b306bb2-kube-api-access-tj78h\") pod \"smart-gateway-operator-2-build\" (UID: \"5a7cdd8f-1476-425d-a189-82a71b306bb2\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 21 18:31:40 crc kubenswrapper[5099]: I0121 18:31:40.697208 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5a7cdd8f-1476-425d-a189-82a71b306bb2-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"5a7cdd8f-1476-425d-a189-82a71b306bb2\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 21 18:31:40 crc kubenswrapper[5099]: I0121 18:31:40.798959 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5a7cdd8f-1476-425d-a189-82a71b306bb2-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"5a7cdd8f-1476-425d-a189-82a71b306bb2\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 21 18:31:40 crc kubenswrapper[5099]: I0121 18:31:40.799057 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5a7cdd8f-1476-425d-a189-82a71b306bb2-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"5a7cdd8f-1476-425d-a189-82a71b306bb2\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 21 18:31:40 crc kubenswrapper[5099]: I0121 18:31:40.799095 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName:
\"kubernetes.io/empty-dir/5a7cdd8f-1476-425d-a189-82a71b306bb2-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"5a7cdd8f-1476-425d-a189-82a71b306bb2\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 18:31:40 crc kubenswrapper[5099]: I0121 18:31:40.799148 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tj78h\" (UniqueName: \"kubernetes.io/projected/5a7cdd8f-1476-425d-a189-82a71b306bb2-kube-api-access-tj78h\") pod \"smart-gateway-operator-2-build\" (UID: \"5a7cdd8f-1476-425d-a189-82a71b306bb2\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 18:31:40 crc kubenswrapper[5099]: I0121 18:31:40.799208 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5a7cdd8f-1476-425d-a189-82a71b306bb2-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"5a7cdd8f-1476-425d-a189-82a71b306bb2\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 18:31:40 crc kubenswrapper[5099]: I0121 18:31:40.799334 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5a7cdd8f-1476-425d-a189-82a71b306bb2-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"5a7cdd8f-1476-425d-a189-82a71b306bb2\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 18:31:40 crc kubenswrapper[5099]: I0121 18:31:40.799364 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-j8qh6-pull\" (UniqueName: \"kubernetes.io/secret/5a7cdd8f-1476-425d-a189-82a71b306bb2-builder-dockercfg-j8qh6-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"5a7cdd8f-1476-425d-a189-82a71b306bb2\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 18:31:40 crc kubenswrapper[5099]: I0121 18:31:40.799403 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5a7cdd8f-1476-425d-a189-82a71b306bb2-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"5a7cdd8f-1476-425d-a189-82a71b306bb2\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 18:31:40 crc kubenswrapper[5099]: I0121 18:31:40.799439 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5a7cdd8f-1476-425d-a189-82a71b306bb2-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"5a7cdd8f-1476-425d-a189-82a71b306bb2\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 18:31:40 crc kubenswrapper[5099]: I0121 18:31:40.799528 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5a7cdd8f-1476-425d-a189-82a71b306bb2-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"5a7cdd8f-1476-425d-a189-82a71b306bb2\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 18:31:40 crc kubenswrapper[5099]: I0121 18:31:40.799661 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5a7cdd8f-1476-425d-a189-82a71b306bb2-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"5a7cdd8f-1476-425d-a189-82a71b306bb2\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 18:31:40 crc kubenswrapper[5099]: I0121 18:31:40.799717 5099 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5a7cdd8f-1476-425d-a189-82a71b306bb2-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"5a7cdd8f-1476-425d-a189-82a71b306bb2\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 18:31:40 crc kubenswrapper[5099]: I0121 18:31:40.799976 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5a7cdd8f-1476-425d-a189-82a71b306bb2-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"5a7cdd8f-1476-425d-a189-82a71b306bb2\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 18:31:40 crc kubenswrapper[5099]: I0121 18:31:40.800071 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5a7cdd8f-1476-425d-a189-82a71b306bb2-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"5a7cdd8f-1476-425d-a189-82a71b306bb2\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 18:31:40 crc kubenswrapper[5099]: I0121 18:31:40.800084 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5a7cdd8f-1476-425d-a189-82a71b306bb2-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"5a7cdd8f-1476-425d-a189-82a71b306bb2\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 18:31:40 crc kubenswrapper[5099]: I0121 18:31:40.800117 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5a7cdd8f-1476-425d-a189-82a71b306bb2-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"5a7cdd8f-1476-425d-a189-82a71b306bb2\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 18:31:40 crc kubenswrapper[5099]: I0121 18:31:40.800194 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-j8qh6-push\" (UniqueName: \"kubernetes.io/secret/5a7cdd8f-1476-425d-a189-82a71b306bb2-builder-dockercfg-j8qh6-push\") pod \"smart-gateway-operator-2-build\" (UID: \"5a7cdd8f-1476-425d-a189-82a71b306bb2\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 18:31:40 crc kubenswrapper[5099]: I0121 18:31:40.800343 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5a7cdd8f-1476-425d-a189-82a71b306bb2-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"5a7cdd8f-1476-425d-a189-82a71b306bb2\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 18:31:40 crc kubenswrapper[5099]: I0121 18:31:40.800578 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5a7cdd8f-1476-425d-a189-82a71b306bb2-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"5a7cdd8f-1476-425d-a189-82a71b306bb2\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 18:31:40 crc kubenswrapper[5099]: I0121 18:31:40.801009 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5a7cdd8f-1476-425d-a189-82a71b306bb2-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"5a7cdd8f-1476-425d-a189-82a71b306bb2\") " 
pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 18:31:40 crc kubenswrapper[5099]: I0121 18:31:40.801902 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5a7cdd8f-1476-425d-a189-82a71b306bb2-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"5a7cdd8f-1476-425d-a189-82a71b306bb2\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 18:31:40 crc kubenswrapper[5099]: I0121 18:31:40.806882 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-j8qh6-push\" (UniqueName: \"kubernetes.io/secret/5a7cdd8f-1476-425d-a189-82a71b306bb2-builder-dockercfg-j8qh6-push\") pod \"smart-gateway-operator-2-build\" (UID: \"5a7cdd8f-1476-425d-a189-82a71b306bb2\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 18:31:40 crc kubenswrapper[5099]: I0121 18:31:40.807051 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-j8qh6-pull\" (UniqueName: \"kubernetes.io/secret/5a7cdd8f-1476-425d-a189-82a71b306bb2-builder-dockercfg-j8qh6-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"5a7cdd8f-1476-425d-a189-82a71b306bb2\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 18:31:40 crc kubenswrapper[5099]: I0121 18:31:40.824248 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tj78h\" (UniqueName: \"kubernetes.io/projected/5a7cdd8f-1476-425d-a189-82a71b306bb2-kube-api-access-tj78h\") pod \"smart-gateway-operator-2-build\" (UID: \"5a7cdd8f-1476-425d-a189-82a71b306bb2\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 18:31:40 crc kubenswrapper[5099]: I0121 18:31:40.918828 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 18:31:41 crc kubenswrapper[5099]: I0121 18:31:41.166457 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Jan 21 18:31:43 crc kubenswrapper[5099]: I0121 18:31:43.083082 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_cf7a2494-f386-4b84-910b-40a693faa3a4/docker-build/0.log" Jan 21 18:31:43 crc kubenswrapper[5099]: I0121 18:31:43.084794 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 18:31:43 crc kubenswrapper[5099]: I0121 18:31:43.136533 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cf7a2494-f386-4b84-910b-40a693faa3a4-node-pullsecrets\") pod \"cf7a2494-f386-4b84-910b-40a693faa3a4\" (UID: \"cf7a2494-f386-4b84-910b-40a693faa3a4\") " Jan 21 18:31:43 crc kubenswrapper[5099]: I0121 18:31:43.136724 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfqxw\" (UniqueName: \"kubernetes.io/projected/cf7a2494-f386-4b84-910b-40a693faa3a4-kube-api-access-rfqxw\") pod \"cf7a2494-f386-4b84-910b-40a693faa3a4\" (UID: \"cf7a2494-f386-4b84-910b-40a693faa3a4\") " Jan 21 18:31:43 crc kubenswrapper[5099]: I0121 18:31:43.136716 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf7a2494-f386-4b84-910b-40a693faa3a4-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "cf7a2494-f386-4b84-910b-40a693faa3a4" (UID: "cf7a2494-f386-4b84-910b-40a693faa3a4"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 18:31:43 crc kubenswrapper[5099]: I0121 18:31:43.136781 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cf7a2494-f386-4b84-910b-40a693faa3a4-build-blob-cache\") pod \"cf7a2494-f386-4b84-910b-40a693faa3a4\" (UID: \"cf7a2494-f386-4b84-910b-40a693faa3a4\") " Jan 21 18:31:43 crc kubenswrapper[5099]: I0121 18:31:43.136912 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cf7a2494-f386-4b84-910b-40a693faa3a4-build-proxy-ca-bundles\") pod \"cf7a2494-f386-4b84-910b-40a693faa3a4\" (UID: \"cf7a2494-f386-4b84-910b-40a693faa3a4\") " Jan 21 18:31:43 crc kubenswrapper[5099]: I0121 18:31:43.136991 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cf7a2494-f386-4b84-910b-40a693faa3a4-buildworkdir\") pod \"cf7a2494-f386-4b84-910b-40a693faa3a4\" (UID: \"cf7a2494-f386-4b84-910b-40a693faa3a4\") " Jan 21 18:31:43 crc kubenswrapper[5099]: I0121 18:31:43.137086 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-j8qh6-pull\" (UniqueName: \"kubernetes.io/secret/cf7a2494-f386-4b84-910b-40a693faa3a4-builder-dockercfg-j8qh6-pull\") pod \"cf7a2494-f386-4b84-910b-40a693faa3a4\" (UID: \"cf7a2494-f386-4b84-910b-40a693faa3a4\") " Jan 21 18:31:43 crc kubenswrapper[5099]: I0121 18:31:43.137138 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cf7a2494-f386-4b84-910b-40a693faa3a4-build-system-configs\") pod \"cf7a2494-f386-4b84-910b-40a693faa3a4\" (UID: \"cf7a2494-f386-4b84-910b-40a693faa3a4\") " Jan 21 18:31:43 crc kubenswrapper[5099]: I0121 18:31:43.137981 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf7a2494-f386-4b84-910b-40a693faa3a4-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "cf7a2494-f386-4b84-910b-40a693faa3a4" (UID: "cf7a2494-f386-4b84-910b-40a693faa3a4"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:31:43 crc kubenswrapper[5099]: I0121 18:31:43.138027 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf7a2494-f386-4b84-910b-40a693faa3a4-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "cf7a2494-f386-4b84-910b-40a693faa3a4" (UID: "cf7a2494-f386-4b84-910b-40a693faa3a4"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:31:43 crc kubenswrapper[5099]: I0121 18:31:43.138103 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-j8qh6-push\" (UniqueName: \"kubernetes.io/secret/cf7a2494-f386-4b84-910b-40a693faa3a4-builder-dockercfg-j8qh6-push\") pod \"cf7a2494-f386-4b84-910b-40a693faa3a4\" (UID: \"cf7a2494-f386-4b84-910b-40a693faa3a4\") " Jan 21 18:31:43 crc kubenswrapper[5099]: I0121 18:31:43.138154 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cf7a2494-f386-4b84-910b-40a693faa3a4-buildcachedir\") pod \"cf7a2494-f386-4b84-910b-40a693faa3a4\" (UID: \"cf7a2494-f386-4b84-910b-40a693faa3a4\") " Jan 21 18:31:43 crc kubenswrapper[5099]: I0121 18:31:43.138245 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cf7a2494-f386-4b84-910b-40a693faa3a4-build-ca-bundles\") pod \"cf7a2494-f386-4b84-910b-40a693faa3a4\" (UID: \"cf7a2494-f386-4b84-910b-40a693faa3a4\") " Jan 21 18:31:43 crc kubenswrapper[5099]: I0121 18:31:43.138308 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cf7a2494-f386-4b84-910b-40a693faa3a4-container-storage-root\") pod \"cf7a2494-f386-4b84-910b-40a693faa3a4\" (UID: \"cf7a2494-f386-4b84-910b-40a693faa3a4\") " Jan 21 18:31:43 crc kubenswrapper[5099]: I0121 18:31:43.138380 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cf7a2494-f386-4b84-910b-40a693faa3a4-container-storage-run\") pod \"cf7a2494-f386-4b84-910b-40a693faa3a4\" (UID: \"cf7a2494-f386-4b84-910b-40a693faa3a4\") " Jan 21 18:31:43 crc kubenswrapper[5099]: I0121 18:31:43.138957 5099 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cf7a2494-f386-4b84-910b-40a693faa3a4-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 21 18:31:43 crc kubenswrapper[5099]: I0121 18:31:43.138986 5099 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cf7a2494-f386-4b84-910b-40a693faa3a4-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 21 18:31:43 crc kubenswrapper[5099]: I0121 18:31:43.138998 5099 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cf7a2494-f386-4b84-910b-40a693faa3a4-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 21 18:31:43 crc kubenswrapper[5099]: I0121 18:31:43.139087 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf7a2494-f386-4b84-910b-40a693faa3a4-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "cf7a2494-f386-4b84-910b-40a693faa3a4" (UID: "cf7a2494-f386-4b84-910b-40a693faa3a4"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:31:43 crc kubenswrapper[5099]: I0121 18:31:43.139121 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf7a2494-f386-4b84-910b-40a693faa3a4-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "cf7a2494-f386-4b84-910b-40a693faa3a4" (UID: "cf7a2494-f386-4b84-910b-40a693faa3a4"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 18:31:43 crc kubenswrapper[5099]: I0121 18:31:43.139316 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf7a2494-f386-4b84-910b-40a693faa3a4-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "cf7a2494-f386-4b84-910b-40a693faa3a4" (UID: "cf7a2494-f386-4b84-910b-40a693faa3a4"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:31:43 crc kubenswrapper[5099]: I0121 18:31:43.140083 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf7a2494-f386-4b84-910b-40a693faa3a4-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "cf7a2494-f386-4b84-910b-40a693faa3a4" (UID: "cf7a2494-f386-4b84-910b-40a693faa3a4"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:31:43 crc kubenswrapper[5099]: I0121 18:31:43.140843 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf7a2494-f386-4b84-910b-40a693faa3a4-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "cf7a2494-f386-4b84-910b-40a693faa3a4" (UID: "cf7a2494-f386-4b84-910b-40a693faa3a4"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:31:43 crc kubenswrapper[5099]: I0121 18:31:43.144018 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf7a2494-f386-4b84-910b-40a693faa3a4-kube-api-access-rfqxw" (OuterVolumeSpecName: "kube-api-access-rfqxw") pod "cf7a2494-f386-4b84-910b-40a693faa3a4" (UID: "cf7a2494-f386-4b84-910b-40a693faa3a4"). InnerVolumeSpecName "kube-api-access-rfqxw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:31:43 crc kubenswrapper[5099]: I0121 18:31:43.144753 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf7a2494-f386-4b84-910b-40a693faa3a4-builder-dockercfg-j8qh6-pull" (OuterVolumeSpecName: "builder-dockercfg-j8qh6-pull") pod "cf7a2494-f386-4b84-910b-40a693faa3a4" (UID: "cf7a2494-f386-4b84-910b-40a693faa3a4"). InnerVolumeSpecName "builder-dockercfg-j8qh6-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:31:43 crc kubenswrapper[5099]: I0121 18:31:43.144797 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf7a2494-f386-4b84-910b-40a693faa3a4-builder-dockercfg-j8qh6-push" (OuterVolumeSpecName: "builder-dockercfg-j8qh6-push") pod "cf7a2494-f386-4b84-910b-40a693faa3a4" (UID: "cf7a2494-f386-4b84-910b-40a693faa3a4"). InnerVolumeSpecName "builder-dockercfg-j8qh6-push". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:31:43 crc kubenswrapper[5099]: I0121 18:31:43.240058 5099 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cf7a2494-f386-4b84-910b-40a693faa3a4-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 18:31:43 crc kubenswrapper[5099]: I0121 18:31:43.240441 5099 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-j8qh6-pull\" (UniqueName: \"kubernetes.io/secret/cf7a2494-f386-4b84-910b-40a693faa3a4-builder-dockercfg-j8qh6-pull\") on node \"crc\" DevicePath \"\"" Jan 21 18:31:43 crc kubenswrapper[5099]: I0121 18:31:43.240452 5099 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-j8qh6-push\" (UniqueName: \"kubernetes.io/secret/cf7a2494-f386-4b84-910b-40a693faa3a4-builder-dockercfg-j8qh6-push\") on node \"crc\" DevicePath \"\"" Jan 21 18:31:43 crc kubenswrapper[5099]: I0121 18:31:43.240465 5099 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cf7a2494-f386-4b84-910b-40a693faa3a4-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 21 18:31:43 crc kubenswrapper[5099]: I0121 18:31:43.240475 5099 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cf7a2494-f386-4b84-910b-40a693faa3a4-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 18:31:43 crc kubenswrapper[5099]: I0121 18:31:43.240483 5099 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cf7a2494-f386-4b84-910b-40a693faa3a4-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 21 18:31:43 crc kubenswrapper[5099]: I0121 18:31:43.240491 5099 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cf7a2494-f386-4b84-910b-40a693faa3a4-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 21 18:31:43 crc kubenswrapper[5099]: I0121 18:31:43.240502 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rfqxw\" (UniqueName: \"kubernetes.io/projected/cf7a2494-f386-4b84-910b-40a693faa3a4-kube-api-access-rfqxw\") on node \"crc\" DevicePath \"\"" Jan 21 18:31:43 crc kubenswrapper[5099]: I0121 18:31:43.289436 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf7a2494-f386-4b84-910b-40a693faa3a4-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "cf7a2494-f386-4b84-910b-40a693faa3a4" (UID: "cf7a2494-f386-4b84-910b-40a693faa3a4"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:31:43 crc kubenswrapper[5099]: I0121 18:31:43.607345 5099 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cf7a2494-f386-4b84-910b-40a693faa3a4-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 21 18:31:43 crc kubenswrapper[5099]: I0121 18:31:43.612721 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_cf7a2494-f386-4b84-910b-40a693faa3a4/docker-build/0.log" Jan 21 18:31:43 crc kubenswrapper[5099]: I0121 18:31:43.614146 5099 generic.go:358] "Generic (PLEG): container finished" podID="cf7a2494-f386-4b84-910b-40a693faa3a4" containerID="253e1503fa905d39e4d72c59493326a20d77070c465a640f447a85a9567c18f0" exitCode=1 Jan 21 18:31:43 crc kubenswrapper[5099]: I0121 18:31:43.614285 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"cf7a2494-f386-4b84-910b-40a693faa3a4","Type":"ContainerDied","Data":"253e1503fa905d39e4d72c59493326a20d77070c465a640f447a85a9567c18f0"} Jan 21 18:31:43 crc kubenswrapper[5099]: I0121 18:31:43.614369 5099 scope.go:117] "RemoveContainer" containerID="253e1503fa905d39e4d72c59493326a20d77070c465a640f447a85a9567c18f0" Jan 21 18:31:43 crc kubenswrapper[5099]: I0121 18:31:43.617390 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"5a7cdd8f-1476-425d-a189-82a71b306bb2","Type":"ContainerStarted","Data":"67592f82331e556571f44624a923e0c0430c98ce01cae73e436c36862820d87d"} Jan 21 18:31:43 crc kubenswrapper[5099]: I0121 18:31:43.682937 5099 scope.go:117] "RemoveContainer" containerID="6a60a0247f99b5a21556bcc94655560b88b7cabe130d0a9a90ab210cc4a230da" Jan 21 18:31:43 crc kubenswrapper[5099]: E0121 18:31:43.763713 5099 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.129.56.61:53374->38.129.56.61:35733: read tcp 38.129.56.61:53374->38.129.56.61:35733: read: connection reset by peer Jan 21 18:31:44 crc kubenswrapper[5099]: I0121 18:31:44.626922 5099 generic.go:358] "Generic (PLEG): container finished" podID="5a7cdd8f-1476-425d-a189-82a71b306bb2" containerID="b8ec9b228499febf310c77c42779d80dfd4baa744c292d562e36907f69ae8743" exitCode=0 Jan 21 18:31:44 crc kubenswrapper[5099]: I0121 18:31:44.627129 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"5a7cdd8f-1476-425d-a189-82a71b306bb2","Type":"ContainerDied","Data":"b8ec9b228499febf310c77c42779d80dfd4baa744c292d562e36907f69ae8743"} Jan 21 18:31:44 crc kubenswrapper[5099]: I0121 18:31:44.633291 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"cf7a2494-f386-4b84-910b-40a693faa3a4","Type":"ContainerDied","Data":"0a806038970f685c653b0913edcc02071804997a3d54f6135c10c7565bfb45c7"} Jan 21 18:31:44 crc kubenswrapper[5099]: I0121 18:31:44.633470 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 18:31:44 crc kubenswrapper[5099]: I0121 18:31:44.694928 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 21 18:31:44 crc kubenswrapper[5099]: I0121 18:31:44.709959 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 21 18:31:45 crc kubenswrapper[5099]: I0121 18:31:45.645322 5099 generic.go:358] "Generic (PLEG): container finished" podID="5a7cdd8f-1476-425d-a189-82a71b306bb2" containerID="941f36bf9d12c16acaff852760cac03b90a18b7e7527cb0a073e1d308e6aa0d7" exitCode=0 Jan 21 18:31:45 crc kubenswrapper[5099]: I0121 18:31:45.645441 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"5a7cdd8f-1476-425d-a189-82a71b306bb2","Type":"ContainerDied","Data":"941f36bf9d12c16acaff852760cac03b90a18b7e7527cb0a073e1d308e6aa0d7"} Jan 21 18:31:45 crc kubenswrapper[5099]: I0121 18:31:45.747077 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-2-build_5a7cdd8f-1476-425d-a189-82a71b306bb2/manage-dockerfile/0.log" Jan 21 18:31:45 crc kubenswrapper[5099]: I0121 18:31:45.925889 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf7a2494-f386-4b84-910b-40a693faa3a4" path="/var/lib/kubelet/pods/cf7a2494-f386-4b84-910b-40a693faa3a4/volumes" Jan 21 18:31:46 crc kubenswrapper[5099]: I0121 18:31:46.669267 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"5a7cdd8f-1476-425d-a189-82a71b306bb2","Type":"ContainerStarted","Data":"9a7ee0125da2fe804d498a53886892e73d4ad4cc2d092336464b0ad5452e18f9"} Jan 21 18:31:46 crc kubenswrapper[5099]: I0121 18:31:46.700168 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-2-build" podStartSLOduration=8.700147887 podStartE2EDuration="8.700147887s" podCreationTimestamp="2026-01-21 18:31:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:31:46.695043704 +0000 UTC m=+1064.109006165" watchObservedRunningTime="2026-01-21 18:31:46.700147887 +0000 UTC m=+1064.114110348" Jan 21 18:32:00 crc kubenswrapper[5099]: I0121 18:32:00.163223 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483672-jw8wz"] Jan 21 18:32:00 crc kubenswrapper[5099]: I0121 18:32:00.164815 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cf7a2494-f386-4b84-910b-40a693faa3a4" containerName="manage-dockerfile" Jan 21 18:32:00 crc kubenswrapper[5099]: I0121 18:32:00.164836 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf7a2494-f386-4b84-910b-40a693faa3a4" containerName="manage-dockerfile" Jan 21 18:32:00 crc kubenswrapper[5099]: I0121 18:32:00.164891 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cf7a2494-f386-4b84-910b-40a693faa3a4" containerName="docker-build" Jan 21 18:32:00 crc kubenswrapper[5099]: I0121 18:32:00.164900 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf7a2494-f386-4b84-910b-40a693faa3a4" containerName="docker-build" Jan 21 18:32:00 crc kubenswrapper[5099]: I0121 18:32:00.165111 5099 memory_manager.go:356] "RemoveStaleState removing state" 
podUID="cf7a2494-f386-4b84-910b-40a693faa3a4" containerName="docker-build" Jan 21 18:32:00 crc kubenswrapper[5099]: I0121 18:32:00.182260 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483672-jw8wz"] Jan 21 18:32:00 crc kubenswrapper[5099]: I0121 18:32:00.182494 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483672-jw8wz" Jan 21 18:32:00 crc kubenswrapper[5099]: I0121 18:32:00.185634 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 18:32:00 crc kubenswrapper[5099]: I0121 18:32:00.186819 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 18:32:00 crc kubenswrapper[5099]: I0121 18:32:00.190396 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-z79sf\"" Jan 21 18:32:00 crc kubenswrapper[5099]: I0121 18:32:00.290166 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k69v4\" (UniqueName: \"kubernetes.io/projected/00feba76-7b02-4f96-901a-29608e1a9227-kube-api-access-k69v4\") pod \"auto-csr-approver-29483672-jw8wz\" (UID: \"00feba76-7b02-4f96-901a-29608e1a9227\") " pod="openshift-infra/auto-csr-approver-29483672-jw8wz" Jan 21 18:32:00 crc kubenswrapper[5099]: I0121 18:32:00.391568 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k69v4\" (UniqueName: \"kubernetes.io/projected/00feba76-7b02-4f96-901a-29608e1a9227-kube-api-access-k69v4\") pod \"auto-csr-approver-29483672-jw8wz\" (UID: \"00feba76-7b02-4f96-901a-29608e1a9227\") " pod="openshift-infra/auto-csr-approver-29483672-jw8wz" Jan 21 18:32:00 crc kubenswrapper[5099]: I0121 18:32:00.415909 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k69v4\" (UniqueName: \"kubernetes.io/projected/00feba76-7b02-4f96-901a-29608e1a9227-kube-api-access-k69v4\") pod \"auto-csr-approver-29483672-jw8wz\" (UID: \"00feba76-7b02-4f96-901a-29608e1a9227\") " pod="openshift-infra/auto-csr-approver-29483672-jw8wz" Jan 21 18:32:00 crc kubenswrapper[5099]: I0121 18:32:00.501840 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483672-jw8wz" Jan 21 18:32:00 crc kubenswrapper[5099]: I0121 18:32:00.730721 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483672-jw8wz"] Jan 21 18:32:00 crc kubenswrapper[5099]: W0121 18:32:00.766107 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00feba76_7b02_4f96_901a_29608e1a9227.slice/crio-1ecc78eb5aae6a15c41ed90ab056c717ddac5f57deedcb7c375e788810b9cb12 WatchSource:0}: Error finding container 1ecc78eb5aae6a15c41ed90ab056c717ddac5f57deedcb7c375e788810b9cb12: Status 404 returned error can't find the container with id 1ecc78eb5aae6a15c41ed90ab056c717ddac5f57deedcb7c375e788810b9cb12 Jan 21 18:32:00 crc kubenswrapper[5099]: I0121 18:32:00.791114 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483672-jw8wz" event={"ID":"00feba76-7b02-4f96-901a-29608e1a9227","Type":"ContainerStarted","Data":"1ecc78eb5aae6a15c41ed90ab056c717ddac5f57deedcb7c375e788810b9cb12"} Jan 21 18:32:02 crc kubenswrapper[5099]: I0121 18:32:02.806564 5099 generic.go:358] "Generic (PLEG): container finished" podID="00feba76-7b02-4f96-901a-29608e1a9227" containerID="11f41b213fba7cd097af67144e9fe8d9721185bf8815b4ab3a852f67cd956389" exitCode=0 Jan 21 18:32:02 crc kubenswrapper[5099]: I0121 18:32:02.806650 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483672-jw8wz" event={"ID":"00feba76-7b02-4f96-901a-29608e1a9227","Type":"ContainerDied","Data":"11f41b213fba7cd097af67144e9fe8d9721185bf8815b4ab3a852f67cd956389"} Jan 21 18:32:04 crc kubenswrapper[5099]: I0121 18:32:04.077844 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483672-jw8wz" Jan 21 18:32:04 crc kubenswrapper[5099]: I0121 18:32:04.259615 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k69v4\" (UniqueName: \"kubernetes.io/projected/00feba76-7b02-4f96-901a-29608e1a9227-kube-api-access-k69v4\") pod \"00feba76-7b02-4f96-901a-29608e1a9227\" (UID: \"00feba76-7b02-4f96-901a-29608e1a9227\") " Jan 21 18:32:04 crc kubenswrapper[5099]: I0121 18:32:04.270663 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00feba76-7b02-4f96-901a-29608e1a9227-kube-api-access-k69v4" (OuterVolumeSpecName: "kube-api-access-k69v4") pod "00feba76-7b02-4f96-901a-29608e1a9227" (UID: "00feba76-7b02-4f96-901a-29608e1a9227"). InnerVolumeSpecName "kube-api-access-k69v4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:32:04 crc kubenswrapper[5099]: I0121 18:32:04.361858 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k69v4\" (UniqueName: \"kubernetes.io/projected/00feba76-7b02-4f96-901a-29608e1a9227-kube-api-access-k69v4\") on node \"crc\" DevicePath \"\"" Jan 21 18:32:04 crc kubenswrapper[5099]: I0121 18:32:04.824259 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483672-jw8wz" Jan 21 18:32:04 crc kubenswrapper[5099]: I0121 18:32:04.824277 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483672-jw8wz" event={"ID":"00feba76-7b02-4f96-901a-29608e1a9227","Type":"ContainerDied","Data":"1ecc78eb5aae6a15c41ed90ab056c717ddac5f57deedcb7c375e788810b9cb12"} Jan 21 18:32:04 crc kubenswrapper[5099]: I0121 18:32:04.824339 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ecc78eb5aae6a15c41ed90ab056c717ddac5f57deedcb7c375e788810b9cb12" Jan 21 18:32:05 crc kubenswrapper[5099]: I0121 18:32:05.183455 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483666-mctxk"] Jan 21 18:32:05 crc kubenswrapper[5099]: I0121 18:32:05.191726 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483666-mctxk"] Jan 21 18:32:05 crc kubenswrapper[5099]: I0121 18:32:05.922559 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="349fa1e0-3431-4ef2-8bf3-77c052a7e479" path="/var/lib/kubelet/pods/349fa1e0-3431-4ef2-8bf3-77c052a7e479/volumes" Jan 21 18:32:16 crc kubenswrapper[5099]: I0121 18:32:16.550078 5099 scope.go:117] "RemoveContainer" containerID="5505ca5afa481ffeacca2977f07078fb4d81fcf3b74a0ea1fc655414cc6a80e3" Jan 21 18:33:04 crc kubenswrapper[5099]: I0121 18:33:04.848871 5099 generic.go:358] "Generic (PLEG): container finished" podID="5a7cdd8f-1476-425d-a189-82a71b306bb2" containerID="9a7ee0125da2fe804d498a53886892e73d4ad4cc2d092336464b0ad5452e18f9" exitCode=0 Jan 21 18:33:04 crc kubenswrapper[5099]: I0121 18:33:04.849089 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"5a7cdd8f-1476-425d-a189-82a71b306bb2","Type":"ContainerDied","Data":"9a7ee0125da2fe804d498a53886892e73d4ad4cc2d092336464b0ad5452e18f9"} Jan 21 18:33:06 crc kubenswrapper[5099]: I0121 18:33:06.142201 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 18:33:06 crc kubenswrapper[5099]: I0121 18:33:06.275581 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-j8qh6-push\" (UniqueName: \"kubernetes.io/secret/5a7cdd8f-1476-425d-a189-82a71b306bb2-builder-dockercfg-j8qh6-push\") pod \"5a7cdd8f-1476-425d-a189-82a71b306bb2\" (UID: \"5a7cdd8f-1476-425d-a189-82a71b306bb2\") " Jan 21 18:33:06 crc kubenswrapper[5099]: I0121 18:33:06.275726 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-j8qh6-pull\" (UniqueName: \"kubernetes.io/secret/5a7cdd8f-1476-425d-a189-82a71b306bb2-builder-dockercfg-j8qh6-pull\") pod \"5a7cdd8f-1476-425d-a189-82a71b306bb2\" (UID: \"5a7cdd8f-1476-425d-a189-82a71b306bb2\") " Jan 21 18:33:06 crc kubenswrapper[5099]: I0121 18:33:06.275796 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5a7cdd8f-1476-425d-a189-82a71b306bb2-build-blob-cache\") pod \"5a7cdd8f-1476-425d-a189-82a71b306bb2\" (UID: \"5a7cdd8f-1476-425d-a189-82a71b306bb2\") " Jan 21 18:33:06 crc kubenswrapper[5099]: I0121 18:33:06.275828 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5a7cdd8f-1476-425d-a189-82a71b306bb2-build-proxy-ca-bundles\") pod \"5a7cdd8f-1476-425d-a189-82a71b306bb2\" (UID: \"5a7cdd8f-1476-425d-a189-82a71b306bb2\") " Jan 21 18:33:06 crc kubenswrapper[5099]: I0121 18:33:06.275861 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tj78h\" (UniqueName: \"kubernetes.io/projected/5a7cdd8f-1476-425d-a189-82a71b306bb2-kube-api-access-tj78h\") pod \"5a7cdd8f-1476-425d-a189-82a71b306bb2\" (UID: \"5a7cdd8f-1476-425d-a189-82a71b306bb2\") " Jan 21 18:33:06 crc kubenswrapper[5099]: I0121 18:33:06.277214 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a7cdd8f-1476-425d-a189-82a71b306bb2-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "5a7cdd8f-1476-425d-a189-82a71b306bb2" (UID: "5a7cdd8f-1476-425d-a189-82a71b306bb2"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:33:06 crc kubenswrapper[5099]: I0121 18:33:06.277496 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5a7cdd8f-1476-425d-a189-82a71b306bb2-buildcachedir\") pod \"5a7cdd8f-1476-425d-a189-82a71b306bb2\" (UID: \"5a7cdd8f-1476-425d-a189-82a71b306bb2\") " Jan 21 18:33:06 crc kubenswrapper[5099]: I0121 18:33:06.277533 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a7cdd8f-1476-425d-a189-82a71b306bb2-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "5a7cdd8f-1476-425d-a189-82a71b306bb2" (UID: "5a7cdd8f-1476-425d-a189-82a71b306bb2"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 18:33:06 crc kubenswrapper[5099]: I0121 18:33:06.277553 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5a7cdd8f-1476-425d-a189-82a71b306bb2-container-storage-root\") pod \"5a7cdd8f-1476-425d-a189-82a71b306bb2\" (UID: \"5a7cdd8f-1476-425d-a189-82a71b306bb2\") " Jan 21 18:33:06 crc kubenswrapper[5099]: I0121 18:33:06.277589 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5a7cdd8f-1476-425d-a189-82a71b306bb2-container-storage-run\") pod \"5a7cdd8f-1476-425d-a189-82a71b306bb2\" (UID: \"5a7cdd8f-1476-425d-a189-82a71b306bb2\") " Jan 21 18:33:06 crc kubenswrapper[5099]: I0121 18:33:06.277620 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5a7cdd8f-1476-425d-a189-82a71b306bb2-node-pullsecrets\") pod \"5a7cdd8f-1476-425d-a189-82a71b306bb2\" (UID: \"5a7cdd8f-1476-425d-a189-82a71b306bb2\") " Jan 21 18:33:06 crc kubenswrapper[5099]: I0121 18:33:06.277718 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5a7cdd8f-1476-425d-a189-82a71b306bb2-buildworkdir\") pod \"5a7cdd8f-1476-425d-a189-82a71b306bb2\" (UID: \"5a7cdd8f-1476-425d-a189-82a71b306bb2\") " Jan 21 18:33:06 crc kubenswrapper[5099]: I0121 18:33:06.277775 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a7cdd8f-1476-425d-a189-82a71b306bb2-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "5a7cdd8f-1476-425d-a189-82a71b306bb2" (UID: "5a7cdd8f-1476-425d-a189-82a71b306bb2"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 18:33:06 crc kubenswrapper[5099]: I0121 18:33:06.277800 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5a7cdd8f-1476-425d-a189-82a71b306bb2-build-ca-bundles\") pod \"5a7cdd8f-1476-425d-a189-82a71b306bb2\" (UID: \"5a7cdd8f-1476-425d-a189-82a71b306bb2\") " Jan 21 18:33:06 crc kubenswrapper[5099]: I0121 18:33:06.277874 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5a7cdd8f-1476-425d-a189-82a71b306bb2-build-system-configs\") pod \"5a7cdd8f-1476-425d-a189-82a71b306bb2\" (UID: \"5a7cdd8f-1476-425d-a189-82a71b306bb2\") " Jan 21 18:33:06 crc kubenswrapper[5099]: I0121 18:33:06.278559 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a7cdd8f-1476-425d-a189-82a71b306bb2-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "5a7cdd8f-1476-425d-a189-82a71b306bb2" (UID: "5a7cdd8f-1476-425d-a189-82a71b306bb2"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:33:06 crc kubenswrapper[5099]: I0121 18:33:06.278841 5099 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5a7cdd8f-1476-425d-a189-82a71b306bb2-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 21 18:33:06 crc kubenswrapper[5099]: I0121 18:33:06.278908 5099 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5a7cdd8f-1476-425d-a189-82a71b306bb2-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 18:33:06 crc kubenswrapper[5099]: I0121 18:33:06.278917 5099 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5a7cdd8f-1476-425d-a189-82a71b306bb2-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 21 18:33:06 crc kubenswrapper[5099]: I0121 18:33:06.278927 5099 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5a7cdd8f-1476-425d-a189-82a71b306bb2-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 21 18:33:06 crc kubenswrapper[5099]: I0121 18:33:06.279143 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a7cdd8f-1476-425d-a189-82a71b306bb2-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "5a7cdd8f-1476-425d-a189-82a71b306bb2" (UID: "5a7cdd8f-1476-425d-a189-82a71b306bb2"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:33:06 crc kubenswrapper[5099]: I0121 18:33:06.279191 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a7cdd8f-1476-425d-a189-82a71b306bb2-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "5a7cdd8f-1476-425d-a189-82a71b306bb2" (UID: "5a7cdd8f-1476-425d-a189-82a71b306bb2"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:33:06 crc kubenswrapper[5099]: I0121 18:33:06.280169 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a7cdd8f-1476-425d-a189-82a71b306bb2-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "5a7cdd8f-1476-425d-a189-82a71b306bb2" (UID: "5a7cdd8f-1476-425d-a189-82a71b306bb2"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:33:06 crc kubenswrapper[5099]: I0121 18:33:06.284349 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a7cdd8f-1476-425d-a189-82a71b306bb2-builder-dockercfg-j8qh6-pull" (OuterVolumeSpecName: "builder-dockercfg-j8qh6-pull") pod "5a7cdd8f-1476-425d-a189-82a71b306bb2" (UID: "5a7cdd8f-1476-425d-a189-82a71b306bb2"). InnerVolumeSpecName "builder-dockercfg-j8qh6-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:33:06 crc kubenswrapper[5099]: I0121 18:33:06.284454 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a7cdd8f-1476-425d-a189-82a71b306bb2-builder-dockercfg-j8qh6-push" (OuterVolumeSpecName: "builder-dockercfg-j8qh6-push") pod "5a7cdd8f-1476-425d-a189-82a71b306bb2" (UID: "5a7cdd8f-1476-425d-a189-82a71b306bb2"). InnerVolumeSpecName "builder-dockercfg-j8qh6-push". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:33:06 crc kubenswrapper[5099]: I0121 18:33:06.286250 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a7cdd8f-1476-425d-a189-82a71b306bb2-kube-api-access-tj78h" (OuterVolumeSpecName: "kube-api-access-tj78h") pod "5a7cdd8f-1476-425d-a189-82a71b306bb2" (UID: "5a7cdd8f-1476-425d-a189-82a71b306bb2"). InnerVolumeSpecName "kube-api-access-tj78h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:33:06 crc kubenswrapper[5099]: I0121 18:33:06.381359 5099 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-j8qh6-push\" (UniqueName: \"kubernetes.io/secret/5a7cdd8f-1476-425d-a189-82a71b306bb2-builder-dockercfg-j8qh6-push\") on node \"crc\" DevicePath \"\"" Jan 21 18:33:06 crc kubenswrapper[5099]: I0121 18:33:06.381430 5099 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-j8qh6-pull\" (UniqueName: \"kubernetes.io/secret/5a7cdd8f-1476-425d-a189-82a71b306bb2-builder-dockercfg-j8qh6-pull\") on node \"crc\" DevicePath \"\"" Jan 21 18:33:06 crc kubenswrapper[5099]: I0121 18:33:06.381446 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tj78h\" (UniqueName: \"kubernetes.io/projected/5a7cdd8f-1476-425d-a189-82a71b306bb2-kube-api-access-tj78h\") on node \"crc\" DevicePath \"\"" Jan 21 18:33:06 crc kubenswrapper[5099]: I0121 18:33:06.381457 5099 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5a7cdd8f-1476-425d-a189-82a71b306bb2-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 21 18:33:06 crc kubenswrapper[5099]: I0121 18:33:06.381470 5099 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5a7cdd8f-1476-425d-a189-82a71b306bb2-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 21 18:33:06 crc kubenswrapper[5099]: I0121 18:33:06.381482 5099 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5a7cdd8f-1476-425d-a189-82a71b306bb2-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 18:33:06 crc kubenswrapper[5099]: I0121 18:33:06.459713 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a7cdd8f-1476-425d-a189-82a71b306bb2-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "5a7cdd8f-1476-425d-a189-82a71b306bb2" (UID: "5a7cdd8f-1476-425d-a189-82a71b306bb2"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:33:06 crc kubenswrapper[5099]: I0121 18:33:06.482783 5099 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5a7cdd8f-1476-425d-a189-82a71b306bb2-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 21 18:33:06 crc kubenswrapper[5099]: I0121 18:33:06.880054 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"5a7cdd8f-1476-425d-a189-82a71b306bb2","Type":"ContainerDied","Data":"67592f82331e556571f44624a923e0c0430c98ce01cae73e436c36862820d87d"} Jan 21 18:33:06 crc kubenswrapper[5099]: I0121 18:33:06.880130 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67592f82331e556571f44624a923e0c0430c98ce01cae73e436c36862820d87d" Jan 21 18:33:06 crc kubenswrapper[5099]: I0121 18:33:06.880080 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 18:33:08 crc kubenswrapper[5099]: I0121 18:33:08.215350 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a7cdd8f-1476-425d-a189-82a71b306bb2-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "5a7cdd8f-1476-425d-a189-82a71b306bb2" (UID: "5a7cdd8f-1476-425d-a189-82a71b306bb2"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:33:08 crc kubenswrapper[5099]: I0121 18:33:08.313071 5099 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5a7cdd8f-1476-425d-a189-82a71b306bb2-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 18:33:11.167021 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 18:33:11.168310 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5a7cdd8f-1476-425d-a189-82a71b306bb2" containerName="manage-dockerfile" Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 18:33:11.168335 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a7cdd8f-1476-425d-a189-82a71b306bb2" containerName="manage-dockerfile" Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 18:33:11.168354 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5a7cdd8f-1476-425d-a189-82a71b306bb2" containerName="docker-build" Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 18:33:11.168359 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a7cdd8f-1476-425d-a189-82a71b306bb2" containerName="docker-build" Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 18:33:11.168374 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="00feba76-7b02-4f96-901a-29608e1a9227" containerName="oc" Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 18:33:11.168380 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="00feba76-7b02-4f96-901a-29608e1a9227" containerName="oc" Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 18:33:11.168404 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5a7cdd8f-1476-425d-a189-82a71b306bb2" containerName="git-clone" Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 18:33:11.168410 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a7cdd8f-1476-425d-a189-82a71b306bb2" 
containerName="git-clone" Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 18:33:11.168528 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="00feba76-7b02-4f96-901a-29608e1a9227" containerName="oc" Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 18:33:11.168540 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="5a7cdd8f-1476-425d-a189-82a71b306bb2" containerName="docker-build" Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 18:33:11.173926 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build" Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 18:33:11.176603 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-1-sys-config\"" Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 18:33:11.176605 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-1-ca\"" Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 18:33:11.177107 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-1-global-ca\"" Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 18:33:11.177200 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-j8qh6\"" Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 18:33:11.197304 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 18:33:11.261750 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/62068598-f9a6-488b-8d15-551e3cbffeb8-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"62068598-f9a6-488b-8d15-551e3cbffeb8\") " pod="service-telemetry/sg-core-1-build" Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 18:33:11.261819 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/62068598-f9a6-488b-8d15-551e3cbffeb8-buildcachedir\") pod \"sg-core-1-build\" (UID: \"62068598-f9a6-488b-8d15-551e3cbffeb8\") " pod="service-telemetry/sg-core-1-build" Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 18:33:11.261855 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/62068598-f9a6-488b-8d15-551e3cbffeb8-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"62068598-f9a6-488b-8d15-551e3cbffeb8\") " pod="service-telemetry/sg-core-1-build" Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 18:33:11.261905 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/62068598-f9a6-488b-8d15-551e3cbffeb8-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"62068598-f9a6-488b-8d15-551e3cbffeb8\") " pod="service-telemetry/sg-core-1-build" Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 18:33:11.261944 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-j8qh6-pull\" (UniqueName: \"kubernetes.io/secret/62068598-f9a6-488b-8d15-551e3cbffeb8-builder-dockercfg-j8qh6-pull\") pod \"sg-core-1-build\" (UID: \"62068598-f9a6-488b-8d15-551e3cbffeb8\") " pod="service-telemetry/sg-core-1-build" Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 
18:33:11.262209 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-j8qh6-push\" (UniqueName: \"kubernetes.io/secret/62068598-f9a6-488b-8d15-551e3cbffeb8-builder-dockercfg-j8qh6-push\") pod \"sg-core-1-build\" (UID: \"62068598-f9a6-488b-8d15-551e3cbffeb8\") " pod="service-telemetry/sg-core-1-build" Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 18:33:11.262289 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/62068598-f9a6-488b-8d15-551e3cbffeb8-container-storage-root\") pod \"sg-core-1-build\" (UID: \"62068598-f9a6-488b-8d15-551e3cbffeb8\") " pod="service-telemetry/sg-core-1-build" Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 18:33:11.262364 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcj2w\" (UniqueName: \"kubernetes.io/projected/62068598-f9a6-488b-8d15-551e3cbffeb8-kube-api-access-hcj2w\") pod \"sg-core-1-build\" (UID: \"62068598-f9a6-488b-8d15-551e3cbffeb8\") " pod="service-telemetry/sg-core-1-build" Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 18:33:11.262413 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/62068598-f9a6-488b-8d15-551e3cbffeb8-buildworkdir\") pod \"sg-core-1-build\" (UID: \"62068598-f9a6-488b-8d15-551e3cbffeb8\") " pod="service-telemetry/sg-core-1-build" Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 18:33:11.262440 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/62068598-f9a6-488b-8d15-551e3cbffeb8-container-storage-run\") pod \"sg-core-1-build\" (UID: \"62068598-f9a6-488b-8d15-551e3cbffeb8\") " pod="service-telemetry/sg-core-1-build" Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 18:33:11.262568 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/62068598-f9a6-488b-8d15-551e3cbffeb8-build-system-configs\") pod \"sg-core-1-build\" (UID: \"62068598-f9a6-488b-8d15-551e3cbffeb8\") " pod="service-telemetry/sg-core-1-build" Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 18:33:11.262588 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/62068598-f9a6-488b-8d15-551e3cbffeb8-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"62068598-f9a6-488b-8d15-551e3cbffeb8\") " pod="service-telemetry/sg-core-1-build" Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 18:33:11.364180 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-j8qh6-push\" (UniqueName: \"kubernetes.io/secret/62068598-f9a6-488b-8d15-551e3cbffeb8-builder-dockercfg-j8qh6-push\") pod \"sg-core-1-build\" (UID: \"62068598-f9a6-488b-8d15-551e3cbffeb8\") " pod="service-telemetry/sg-core-1-build" Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 18:33:11.364243 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/62068598-f9a6-488b-8d15-551e3cbffeb8-container-storage-root\") pod \"sg-core-1-build\" (UID: \"62068598-f9a6-488b-8d15-551e3cbffeb8\") " 
pod="service-telemetry/sg-core-1-build" Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 18:33:11.364273 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hcj2w\" (UniqueName: \"kubernetes.io/projected/62068598-f9a6-488b-8d15-551e3cbffeb8-kube-api-access-hcj2w\") pod \"sg-core-1-build\" (UID: \"62068598-f9a6-488b-8d15-551e3cbffeb8\") " pod="service-telemetry/sg-core-1-build" Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 18:33:11.364297 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/62068598-f9a6-488b-8d15-551e3cbffeb8-buildworkdir\") pod \"sg-core-1-build\" (UID: \"62068598-f9a6-488b-8d15-551e3cbffeb8\") " pod="service-telemetry/sg-core-1-build" Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 18:33:11.364317 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/62068598-f9a6-488b-8d15-551e3cbffeb8-container-storage-run\") pod \"sg-core-1-build\" (UID: \"62068598-f9a6-488b-8d15-551e3cbffeb8\") " pod="service-telemetry/sg-core-1-build" Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 18:33:11.364357 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/62068598-f9a6-488b-8d15-551e3cbffeb8-build-system-configs\") pod \"sg-core-1-build\" (UID: \"62068598-f9a6-488b-8d15-551e3cbffeb8\") " pod="service-telemetry/sg-core-1-build" Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 18:33:11.364434 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/62068598-f9a6-488b-8d15-551e3cbffeb8-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"62068598-f9a6-488b-8d15-551e3cbffeb8\") " pod="service-telemetry/sg-core-1-build" Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 18:33:11.364479 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/62068598-f9a6-488b-8d15-551e3cbffeb8-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"62068598-f9a6-488b-8d15-551e3cbffeb8\") " pod="service-telemetry/sg-core-1-build" Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 18:33:11.364504 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/62068598-f9a6-488b-8d15-551e3cbffeb8-buildcachedir\") pod \"sg-core-1-build\" (UID: \"62068598-f9a6-488b-8d15-551e3cbffeb8\") " pod="service-telemetry/sg-core-1-build" Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 18:33:11.364523 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/62068598-f9a6-488b-8d15-551e3cbffeb8-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"62068598-f9a6-488b-8d15-551e3cbffeb8\") " pod="service-telemetry/sg-core-1-build" Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 18:33:11.364552 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/62068598-f9a6-488b-8d15-551e3cbffeb8-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"62068598-f9a6-488b-8d15-551e3cbffeb8\") " pod="service-telemetry/sg-core-1-build" Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 18:33:11.364573 5099 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"builder-dockercfg-j8qh6-pull\" (UniqueName: \"kubernetes.io/secret/62068598-f9a6-488b-8d15-551e3cbffeb8-builder-dockercfg-j8qh6-pull\") pod \"sg-core-1-build\" (UID: \"62068598-f9a6-488b-8d15-551e3cbffeb8\") " pod="service-telemetry/sg-core-1-build" Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 18:33:11.364906 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/62068598-f9a6-488b-8d15-551e3cbffeb8-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"62068598-f9a6-488b-8d15-551e3cbffeb8\") " pod="service-telemetry/sg-core-1-build" Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 18:33:11.364962 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/62068598-f9a6-488b-8d15-551e3cbffeb8-buildcachedir\") pod \"sg-core-1-build\" (UID: \"62068598-f9a6-488b-8d15-551e3cbffeb8\") " pod="service-telemetry/sg-core-1-build" Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 18:33:11.365029 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/62068598-f9a6-488b-8d15-551e3cbffeb8-container-storage-root\") pod \"sg-core-1-build\" (UID: \"62068598-f9a6-488b-8d15-551e3cbffeb8\") " pod="service-telemetry/sg-core-1-build" Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 18:33:11.365279 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/62068598-f9a6-488b-8d15-551e3cbffeb8-container-storage-run\") pod \"sg-core-1-build\" (UID: \"62068598-f9a6-488b-8d15-551e3cbffeb8\") " pod="service-telemetry/sg-core-1-build" Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 18:33:11.365349 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/62068598-f9a6-488b-8d15-551e3cbffeb8-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"62068598-f9a6-488b-8d15-551e3cbffeb8\") " pod="service-telemetry/sg-core-1-build" Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 18:33:11.365719 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/62068598-f9a6-488b-8d15-551e3cbffeb8-buildworkdir\") pod \"sg-core-1-build\" (UID: \"62068598-f9a6-488b-8d15-551e3cbffeb8\") " pod="service-telemetry/sg-core-1-build" Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 18:33:11.365904 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/62068598-f9a6-488b-8d15-551e3cbffeb8-build-system-configs\") pod \"sg-core-1-build\" (UID: \"62068598-f9a6-488b-8d15-551e3cbffeb8\") " pod="service-telemetry/sg-core-1-build" Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 18:33:11.366023 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/62068598-f9a6-488b-8d15-551e3cbffeb8-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"62068598-f9a6-488b-8d15-551e3cbffeb8\") " pod="service-telemetry/sg-core-1-build" Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 18:33:11.367088 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/62068598-f9a6-488b-8d15-551e3cbffeb8-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: 
\"62068598-f9a6-488b-8d15-551e3cbffeb8\") " pod="service-telemetry/sg-core-1-build" Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 18:33:11.374060 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-j8qh6-push\" (UniqueName: \"kubernetes.io/secret/62068598-f9a6-488b-8d15-551e3cbffeb8-builder-dockercfg-j8qh6-push\") pod \"sg-core-1-build\" (UID: \"62068598-f9a6-488b-8d15-551e3cbffeb8\") " pod="service-telemetry/sg-core-1-build" Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 18:33:11.379021 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-j8qh6-pull\" (UniqueName: \"kubernetes.io/secret/62068598-f9a6-488b-8d15-551e3cbffeb8-builder-dockercfg-j8qh6-pull\") pod \"sg-core-1-build\" (UID: \"62068598-f9a6-488b-8d15-551e3cbffeb8\") " pod="service-telemetry/sg-core-1-build" Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 18:33:11.386597 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcj2w\" (UniqueName: \"kubernetes.io/projected/62068598-f9a6-488b-8d15-551e3cbffeb8-kube-api-access-hcj2w\") pod \"sg-core-1-build\" (UID: \"62068598-f9a6-488b-8d15-551e3cbffeb8\") " pod="service-telemetry/sg-core-1-build" Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 18:33:11.500014 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build" Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 18:33:11.753949 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 21 18:33:11 crc kubenswrapper[5099]: I0121 18:33:11.930803 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"62068598-f9a6-488b-8d15-551e3cbffeb8","Type":"ContainerStarted","Data":"312900c32c3200dcd420bbc3c46446f983013d96259676e70ef80e95e7fe691d"} Jan 21 18:33:12 crc kubenswrapper[5099]: I0121 18:33:12.940521 5099 generic.go:358] "Generic (PLEG): container finished" podID="62068598-f9a6-488b-8d15-551e3cbffeb8" containerID="47bfed3896126665ea6ebc995958bb361ceacc821eaacff23bc7f519e8e2efdb" exitCode=0 Jan 21 18:33:12 crc kubenswrapper[5099]: I0121 18:33:12.940586 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"62068598-f9a6-488b-8d15-551e3cbffeb8","Type":"ContainerDied","Data":"47bfed3896126665ea6ebc995958bb361ceacc821eaacff23bc7f519e8e2efdb"} Jan 21 18:33:13 crc kubenswrapper[5099]: I0121 18:33:13.951746 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"62068598-f9a6-488b-8d15-551e3cbffeb8","Type":"ContainerStarted","Data":"b117e83c29f61fd1dd1ba9ceef6d2662f87ca89a838ef28324f6309324c541c6"} Jan 21 18:33:13 crc kubenswrapper[5099]: I0121 18:33:13.979923 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-core-1-build" podStartSLOduration=2.979894717 podStartE2EDuration="2.979894717s" podCreationTimestamp="2026-01-21 18:33:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:33:13.979667741 +0000 UTC m=+1151.393630212" watchObservedRunningTime="2026-01-21 18:33:13.979894717 +0000 UTC m=+1151.393857178" Jan 21 18:33:21 crc kubenswrapper[5099]: I0121 18:33:21.607295 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 21 18:33:21 crc kubenswrapper[5099]: I0121 18:33:21.608703 5099 
Jan 21 18:33:22 crc kubenswrapper[5099]: I0121 18:33:22.031972 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_62068598-f9a6-488b-8d15-551e3cbffeb8/docker-build/0.log"
Jan 21 18:33:22 crc kubenswrapper[5099]: I0121 18:33:22.032979 5099 generic.go:358] "Generic (PLEG): container finished" podID="62068598-f9a6-488b-8d15-551e3cbffeb8" containerID="b117e83c29f61fd1dd1ba9ceef6d2662f87ca89a838ef28324f6309324c541c6" exitCode=1
Jan 21 18:33:22 crc kubenswrapper[5099]: I0121 18:33:22.033123 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"62068598-f9a6-488b-8d15-551e3cbffeb8","Type":"ContainerDied","Data":"b117e83c29f61fd1dd1ba9ceef6d2662f87ca89a838ef28324f6309324c541c6"}
Jan 21 18:33:22 crc kubenswrapper[5099]: I0121 18:33:22.033166 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"62068598-f9a6-488b-8d15-551e3cbffeb8","Type":"ContainerDied","Data":"312900c32c3200dcd420bbc3c46446f983013d96259676e70ef80e95e7fe691d"}
Jan 21 18:33:22 crc kubenswrapper[5099]: I0121 18:33:22.033187 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="312900c32c3200dcd420bbc3c46446f983013d96259676e70ef80e95e7fe691d"
Jan 21 18:33:22 crc kubenswrapper[5099]: I0121 18:33:22.046457 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_62068598-f9a6-488b-8d15-551e3cbffeb8/docker-build/0.log"
Jan 21 18:33:22 crc kubenswrapper[5099]: I0121 18:33:22.046925 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build"
Jan 21 18:33:22 crc kubenswrapper[5099]: I0121 18:33:22.065154 5099 patch_prober.go:28] interesting pod/machine-config-daemon-hsl47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 18:33:22 crc kubenswrapper[5099]: I0121 18:33:22.065243 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 18:33:22 crc kubenswrapper[5099]: I0121 18:33:22.137559 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/62068598-f9a6-488b-8d15-551e3cbffeb8-build-proxy-ca-bundles\") pod \"62068598-f9a6-488b-8d15-551e3cbffeb8\" (UID: \"62068598-f9a6-488b-8d15-551e3cbffeb8\") "
Jan 21 18:33:22 crc kubenswrapper[5099]: I0121 18:33:22.137664 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/62068598-f9a6-488b-8d15-551e3cbffeb8-build-ca-bundles\") pod \"62068598-f9a6-488b-8d15-551e3cbffeb8\" (UID: \"62068598-f9a6-488b-8d15-551e3cbffeb8\") "
Jan 21 18:33:22 crc kubenswrapper[5099]: I0121 18:33:22.137790 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/62068598-f9a6-488b-8d15-551e3cbffeb8-node-pullsecrets\") pod \"62068598-f9a6-488b-8d15-551e3cbffeb8\" (UID: \"62068598-f9a6-488b-8d15-551e3cbffeb8\") "
Jan 21 18:33:22 crc kubenswrapper[5099]: I0121 18:33:22.137829 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/62068598-f9a6-488b-8d15-551e3cbffeb8-container-storage-root\") pod \"62068598-f9a6-488b-8d15-551e3cbffeb8\" (UID: \"62068598-f9a6-488b-8d15-551e3cbffeb8\") "
Jan 21 18:33:22 crc kubenswrapper[5099]: I0121 18:33:22.137895 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/62068598-f9a6-488b-8d15-551e3cbffeb8-build-blob-cache\") pod \"62068598-f9a6-488b-8d15-551e3cbffeb8\" (UID: \"62068598-f9a6-488b-8d15-551e3cbffeb8\") "
Jan 21 18:33:22 crc kubenswrapper[5099]: I0121 18:33:22.137948 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/62068598-f9a6-488b-8d15-551e3cbffeb8-build-system-configs\") pod \"62068598-f9a6-488b-8d15-551e3cbffeb8\" (UID: \"62068598-f9a6-488b-8d15-551e3cbffeb8\") "
Jan 21 18:33:22 crc kubenswrapper[5099]: I0121 18:33:22.137947 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62068598-f9a6-488b-8d15-551e3cbffeb8-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "62068598-f9a6-488b-8d15-551e3cbffeb8" (UID: "62068598-f9a6-488b-8d15-551e3cbffeb8"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 18:33:22 crc kubenswrapper[5099]: I0121 18:33:22.137995 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/62068598-f9a6-488b-8d15-551e3cbffeb8-buildcachedir\") pod \"62068598-f9a6-488b-8d15-551e3cbffeb8\" (UID: \"62068598-f9a6-488b-8d15-551e3cbffeb8\") "
Jan 21 18:33:22 crc kubenswrapper[5099]: I0121 18:33:22.138125 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcj2w\" (UniqueName: \"kubernetes.io/projected/62068598-f9a6-488b-8d15-551e3cbffeb8-kube-api-access-hcj2w\") pod \"62068598-f9a6-488b-8d15-551e3cbffeb8\" (UID: \"62068598-f9a6-488b-8d15-551e3cbffeb8\") "
Jan 21 18:33:22 crc kubenswrapper[5099]: I0121 18:33:22.138182 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/62068598-f9a6-488b-8d15-551e3cbffeb8-buildworkdir\") pod \"62068598-f9a6-488b-8d15-551e3cbffeb8\" (UID: \"62068598-f9a6-488b-8d15-551e3cbffeb8\") "
Jan 21 18:33:22 crc kubenswrapper[5099]: I0121 18:33:22.138210 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/62068598-f9a6-488b-8d15-551e3cbffeb8-container-storage-run\") pod \"62068598-f9a6-488b-8d15-551e3cbffeb8\" (UID: \"62068598-f9a6-488b-8d15-551e3cbffeb8\") "
Jan 21 18:33:22 crc kubenswrapper[5099]: I0121 18:33:22.138217 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62068598-f9a6-488b-8d15-551e3cbffeb8-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "62068598-f9a6-488b-8d15-551e3cbffeb8" (UID: "62068598-f9a6-488b-8d15-551e3cbffeb8"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 18:33:22 crc kubenswrapper[5099]: I0121 18:33:22.138263 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-j8qh6-push\" (UniqueName: \"kubernetes.io/secret/62068598-f9a6-488b-8d15-551e3cbffeb8-builder-dockercfg-j8qh6-push\") pod \"62068598-f9a6-488b-8d15-551e3cbffeb8\" (UID: \"62068598-f9a6-488b-8d15-551e3cbffeb8\") "
Jan 21 18:33:22 crc kubenswrapper[5099]: I0121 18:33:22.138353 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-j8qh6-pull\" (UniqueName: \"kubernetes.io/secret/62068598-f9a6-488b-8d15-551e3cbffeb8-builder-dockercfg-j8qh6-pull\") pod \"62068598-f9a6-488b-8d15-551e3cbffeb8\" (UID: \"62068598-f9a6-488b-8d15-551e3cbffeb8\") "
Jan 21 18:33:22 crc kubenswrapper[5099]: I0121 18:33:22.138695 5099 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/62068598-f9a6-488b-8d15-551e3cbffeb8-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Jan 21 18:33:22 crc kubenswrapper[5099]: I0121 18:33:22.138722 5099 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/62068598-f9a6-488b-8d15-551e3cbffeb8-buildcachedir\") on node \"crc\" DevicePath \"\""
Jan 21 18:33:22 crc kubenswrapper[5099]: I0121 18:33:22.139084 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62068598-f9a6-488b-8d15-551e3cbffeb8-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "62068598-f9a6-488b-8d15-551e3cbffeb8" (UID: "62068598-f9a6-488b-8d15-551e3cbffeb8"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 18:33:22 crc kubenswrapper[5099]: I0121 18:33:22.139561 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62068598-f9a6-488b-8d15-551e3cbffeb8-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "62068598-f9a6-488b-8d15-551e3cbffeb8" (UID: "62068598-f9a6-488b-8d15-551e3cbffeb8"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 18:33:22 crc kubenswrapper[5099]: I0121 18:33:22.139779 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62068598-f9a6-488b-8d15-551e3cbffeb8-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "62068598-f9a6-488b-8d15-551e3cbffeb8" (UID: "62068598-f9a6-488b-8d15-551e3cbffeb8"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 18:33:22 crc kubenswrapper[5099]: I0121 18:33:22.140258 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62068598-f9a6-488b-8d15-551e3cbffeb8-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "62068598-f9a6-488b-8d15-551e3cbffeb8" (UID: "62068598-f9a6-488b-8d15-551e3cbffeb8"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 18:33:22 crc kubenswrapper[5099]: I0121 18:33:22.141632 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62068598-f9a6-488b-8d15-551e3cbffeb8-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "62068598-f9a6-488b-8d15-551e3cbffeb8" (UID: "62068598-f9a6-488b-8d15-551e3cbffeb8"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 18:33:22 crc kubenswrapper[5099]: I0121 18:33:22.146427 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62068598-f9a6-488b-8d15-551e3cbffeb8-builder-dockercfg-j8qh6-pull" (OuterVolumeSpecName: "builder-dockercfg-j8qh6-pull") pod "62068598-f9a6-488b-8d15-551e3cbffeb8" (UID: "62068598-f9a6-488b-8d15-551e3cbffeb8"). InnerVolumeSpecName "builder-dockercfg-j8qh6-pull". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 18:33:22 crc kubenswrapper[5099]: I0121 18:33:22.146493 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62068598-f9a6-488b-8d15-551e3cbffeb8-builder-dockercfg-j8qh6-push" (OuterVolumeSpecName: "builder-dockercfg-j8qh6-push") pod "62068598-f9a6-488b-8d15-551e3cbffeb8" (UID: "62068598-f9a6-488b-8d15-551e3cbffeb8"). InnerVolumeSpecName "builder-dockercfg-j8qh6-push". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 18:33:22 crc kubenswrapper[5099]: I0121 18:33:22.146540 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62068598-f9a6-488b-8d15-551e3cbffeb8-kube-api-access-hcj2w" (OuterVolumeSpecName: "kube-api-access-hcj2w") pod "62068598-f9a6-488b-8d15-551e3cbffeb8" (UID: "62068598-f9a6-488b-8d15-551e3cbffeb8"). InnerVolumeSpecName "kube-api-access-hcj2w". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 18:33:22 crc kubenswrapper[5099]: I0121 18:33:22.216898 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62068598-f9a6-488b-8d15-551e3cbffeb8-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "62068598-f9a6-488b-8d15-551e3cbffeb8" (UID: "62068598-f9a6-488b-8d15-551e3cbffeb8"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 18:33:22 crc kubenswrapper[5099]: I0121 18:33:22.240254 5099 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/62068598-f9a6-488b-8d15-551e3cbffeb8-buildworkdir\") on node \"crc\" DevicePath \"\""
Jan 21 18:33:22 crc kubenswrapper[5099]: I0121 18:33:22.240296 5099 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/62068598-f9a6-488b-8d15-551e3cbffeb8-container-storage-run\") on node \"crc\" DevicePath \"\""
Jan 21 18:33:22 crc kubenswrapper[5099]: I0121 18:33:22.240313 5099 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-j8qh6-push\" (UniqueName: \"kubernetes.io/secret/62068598-f9a6-488b-8d15-551e3cbffeb8-builder-dockercfg-j8qh6-push\") on node \"crc\" DevicePath \"\""
Jan 21 18:33:22 crc kubenswrapper[5099]: I0121 18:33:22.240327 5099 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-j8qh6-pull\" (UniqueName: \"kubernetes.io/secret/62068598-f9a6-488b-8d15-551e3cbffeb8-builder-dockercfg-j8qh6-pull\") on node \"crc\" DevicePath \"\""
Jan 21 18:33:22 crc kubenswrapper[5099]: I0121 18:33:22.240339 5099 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/62068598-f9a6-488b-8d15-551e3cbffeb8-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 21 18:33:22 crc kubenswrapper[5099]: I0121 18:33:22.240351 5099 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/62068598-f9a6-488b-8d15-551e3cbffeb8-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 21 18:33:22 crc kubenswrapper[5099]: I0121 18:33:22.240362 5099 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/62068598-f9a6-488b-8d15-551e3cbffeb8-build-blob-cache\") on node \"crc\" DevicePath \"\""
Jan 21 18:33:22 crc kubenswrapper[5099]: I0121 18:33:22.240374 5099 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/62068598-f9a6-488b-8d15-551e3cbffeb8-build-system-configs\") on node \"crc\" DevicePath \"\""
Jan 21 18:33:22 crc kubenswrapper[5099]: I0121 18:33:22.240386 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hcj2w\" (UniqueName: \"kubernetes.io/projected/62068598-f9a6-488b-8d15-551e3cbffeb8-kube-api-access-hcj2w\") on node \"crc\" DevicePath \"\""
Jan 21 18:33:22 crc kubenswrapper[5099]: I0121 18:33:22.252813 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62068598-f9a6-488b-8d15-551e3cbffeb8-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "62068598-f9a6-488b-8d15-551e3cbffeb8" (UID: "62068598-f9a6-488b-8d15-551e3cbffeb8"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 18:33:22 crc kubenswrapper[5099]: I0121 18:33:22.342327 5099 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/62068598-f9a6-488b-8d15-551e3cbffeb8-container-storage-root\") on node \"crc\" DevicePath \"\""
Jan 21 18:33:23 crc kubenswrapper[5099]: I0121 18:33:23.041803 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build"
Need to start a new one" pod="service-telemetry/sg-core-1-build" Jan 21 18:33:23 crc kubenswrapper[5099]: I0121 18:33:23.080604 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 21 18:33:23 crc kubenswrapper[5099]: I0121 18:33:23.089675 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 21 18:33:23 crc kubenswrapper[5099]: I0121 18:33:23.241918 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-2-build"] Jan 21 18:33:23 crc kubenswrapper[5099]: I0121 18:33:23.242977 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="62068598-f9a6-488b-8d15-551e3cbffeb8" containerName="docker-build" Jan 21 18:33:23 crc kubenswrapper[5099]: I0121 18:33:23.243005 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="62068598-f9a6-488b-8d15-551e3cbffeb8" containerName="docker-build" Jan 21 18:33:23 crc kubenswrapper[5099]: I0121 18:33:23.243035 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="62068598-f9a6-488b-8d15-551e3cbffeb8" containerName="manage-dockerfile" Jan 21 18:33:23 crc kubenswrapper[5099]: I0121 18:33:23.243045 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="62068598-f9a6-488b-8d15-551e3cbffeb8" containerName="manage-dockerfile" Jan 21 18:33:23 crc kubenswrapper[5099]: I0121 18:33:23.243178 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="62068598-f9a6-488b-8d15-551e3cbffeb8" containerName="docker-build" Jan 21 18:33:23 crc kubenswrapper[5099]: I0121 18:33:23.256405 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 21 18:33:23 crc kubenswrapper[5099]: I0121 18:33:23.259184 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-2-global-ca\"" Jan 21 18:33:23 crc kubenswrapper[5099]: I0121 18:33:23.259591 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-2-ca\"" Jan 21 18:33:23 crc kubenswrapper[5099]: I0121 18:33:23.259903 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-2-sys-config\"" Jan 21 18:33:23 crc kubenswrapper[5099]: I0121 18:33:23.263387 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-j8qh6\"" Jan 21 18:33:23 crc kubenswrapper[5099]: I0121 18:33:23.266638 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"] Jan 21 18:33:23 crc kubenswrapper[5099]: I0121 18:33:23.357947 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/69430763-5b99-43d3-9530-99409ac0586a-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"69430763-5b99-43d3-9530-99409ac0586a\") " pod="service-telemetry/sg-core-2-build" Jan 21 18:33:23 crc kubenswrapper[5099]: I0121 18:33:23.358228 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/69430763-5b99-43d3-9530-99409ac0586a-container-storage-root\") pod \"sg-core-2-build\" (UID: \"69430763-5b99-43d3-9530-99409ac0586a\") " pod="service-telemetry/sg-core-2-build" Jan 21 18:33:23 crc kubenswrapper[5099]: I0121 18:33:23.358330 5099 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/69430763-5b99-43d3-9530-99409ac0586a-build-system-configs\") pod \"sg-core-2-build\" (UID: \"69430763-5b99-43d3-9530-99409ac0586a\") " pod="service-telemetry/sg-core-2-build" Jan 21 18:33:23 crc kubenswrapper[5099]: I0121 18:33:23.358355 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/69430763-5b99-43d3-9530-99409ac0586a-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"69430763-5b99-43d3-9530-99409ac0586a\") " pod="service-telemetry/sg-core-2-build" Jan 21 18:33:23 crc kubenswrapper[5099]: I0121 18:33:23.358911 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-j8qh6-push\" (UniqueName: \"kubernetes.io/secret/69430763-5b99-43d3-9530-99409ac0586a-builder-dockercfg-j8qh6-push\") pod \"sg-core-2-build\" (UID: \"69430763-5b99-43d3-9530-99409ac0586a\") " pod="service-telemetry/sg-core-2-build" Jan 21 18:33:23 crc kubenswrapper[5099]: I0121 18:33:23.359023 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/69430763-5b99-43d3-9530-99409ac0586a-container-storage-run\") pod \"sg-core-2-build\" (UID: \"69430763-5b99-43d3-9530-99409ac0586a\") " pod="service-telemetry/sg-core-2-build" Jan 21 18:33:23 crc kubenswrapper[5099]: I0121 18:33:23.359068 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/69430763-5b99-43d3-9530-99409ac0586a-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"69430763-5b99-43d3-9530-99409ac0586a\") " pod="service-telemetry/sg-core-2-build" Jan 21 18:33:23 crc kubenswrapper[5099]: I0121 18:33:23.359089 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/69430763-5b99-43d3-9530-99409ac0586a-buildcachedir\") pod \"sg-core-2-build\" (UID: \"69430763-5b99-43d3-9530-99409ac0586a\") " pod="service-telemetry/sg-core-2-build" Jan 21 18:33:23 crc kubenswrapper[5099]: I0121 18:33:23.359135 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t2jt\" (UniqueName: \"kubernetes.io/projected/69430763-5b99-43d3-9530-99409ac0586a-kube-api-access-2t2jt\") pod \"sg-core-2-build\" (UID: \"69430763-5b99-43d3-9530-99409ac0586a\") " pod="service-telemetry/sg-core-2-build" Jan 21 18:33:23 crc kubenswrapper[5099]: I0121 18:33:23.359175 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/69430763-5b99-43d3-9530-99409ac0586a-buildworkdir\") pod \"sg-core-2-build\" (UID: \"69430763-5b99-43d3-9530-99409ac0586a\") " pod="service-telemetry/sg-core-2-build" Jan 21 18:33:23 crc kubenswrapper[5099]: I0121 18:33:23.359206 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-j8qh6-pull\" (UniqueName: \"kubernetes.io/secret/69430763-5b99-43d3-9530-99409ac0586a-builder-dockercfg-j8qh6-pull\") pod \"sg-core-2-build\" (UID: \"69430763-5b99-43d3-9530-99409ac0586a\") " pod="service-telemetry/sg-core-2-build" Jan 21 18:33:23 crc 
kubenswrapper[5099]: I0121 18:33:23.359251 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/69430763-5b99-43d3-9530-99409ac0586a-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"69430763-5b99-43d3-9530-99409ac0586a\") " pod="service-telemetry/sg-core-2-build" Jan 21 18:33:23 crc kubenswrapper[5099]: I0121 18:33:23.461272 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/69430763-5b99-43d3-9530-99409ac0586a-container-storage-root\") pod \"sg-core-2-build\" (UID: \"69430763-5b99-43d3-9530-99409ac0586a\") " pod="service-telemetry/sg-core-2-build" Jan 21 18:33:23 crc kubenswrapper[5099]: I0121 18:33:23.462252 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/69430763-5b99-43d3-9530-99409ac0586a-build-system-configs\") pod \"sg-core-2-build\" (UID: \"69430763-5b99-43d3-9530-99409ac0586a\") " pod="service-telemetry/sg-core-2-build" Jan 21 18:33:23 crc kubenswrapper[5099]: I0121 18:33:23.462312 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/69430763-5b99-43d3-9530-99409ac0586a-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"69430763-5b99-43d3-9530-99409ac0586a\") " pod="service-telemetry/sg-core-2-build" Jan 21 18:33:23 crc kubenswrapper[5099]: I0121 18:33:23.462357 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-j8qh6-push\" (UniqueName: \"kubernetes.io/secret/69430763-5b99-43d3-9530-99409ac0586a-builder-dockercfg-j8qh6-push\") pod \"sg-core-2-build\" (UID: \"69430763-5b99-43d3-9530-99409ac0586a\") " pod="service-telemetry/sg-core-2-build" Jan 21 18:33:23 crc kubenswrapper[5099]: I0121 18:33:23.462454 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/69430763-5b99-43d3-9530-99409ac0586a-container-storage-run\") pod \"sg-core-2-build\" (UID: \"69430763-5b99-43d3-9530-99409ac0586a\") " pod="service-telemetry/sg-core-2-build" Jan 21 18:33:23 crc kubenswrapper[5099]: I0121 18:33:23.462488 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/69430763-5b99-43d3-9530-99409ac0586a-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"69430763-5b99-43d3-9530-99409ac0586a\") " pod="service-telemetry/sg-core-2-build" Jan 21 18:33:23 crc kubenswrapper[5099]: I0121 18:33:23.462529 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/69430763-5b99-43d3-9530-99409ac0586a-container-storage-root\") pod \"sg-core-2-build\" (UID: \"69430763-5b99-43d3-9530-99409ac0586a\") " pod="service-telemetry/sg-core-2-build" Jan 21 18:33:23 crc kubenswrapper[5099]: I0121 18:33:23.462533 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/69430763-5b99-43d3-9530-99409ac0586a-buildcachedir\") pod \"sg-core-2-build\" (UID: \"69430763-5b99-43d3-9530-99409ac0586a\") " pod="service-telemetry/sg-core-2-build" Jan 21 18:33:23 crc kubenswrapper[5099]: I0121 18:33:23.462629 5099 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2t2jt\" (UniqueName: \"kubernetes.io/projected/69430763-5b99-43d3-9530-99409ac0586a-kube-api-access-2t2jt\") pod \"sg-core-2-build\" (UID: \"69430763-5b99-43d3-9530-99409ac0586a\") " pod="service-telemetry/sg-core-2-build" Jan 21 18:33:23 crc kubenswrapper[5099]: I0121 18:33:23.462594 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/69430763-5b99-43d3-9530-99409ac0586a-buildcachedir\") pod \"sg-core-2-build\" (UID: \"69430763-5b99-43d3-9530-99409ac0586a\") " pod="service-telemetry/sg-core-2-build" Jan 21 18:33:23 crc kubenswrapper[5099]: I0121 18:33:23.462664 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/69430763-5b99-43d3-9530-99409ac0586a-buildworkdir\") pod \"sg-core-2-build\" (UID: \"69430763-5b99-43d3-9530-99409ac0586a\") " pod="service-telemetry/sg-core-2-build" Jan 21 18:33:23 crc kubenswrapper[5099]: I0121 18:33:23.462877 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-j8qh6-pull\" (UniqueName: \"kubernetes.io/secret/69430763-5b99-43d3-9530-99409ac0586a-builder-dockercfg-j8qh6-pull\") pod \"sg-core-2-build\" (UID: \"69430763-5b99-43d3-9530-99409ac0586a\") " pod="service-telemetry/sg-core-2-build" Jan 21 18:33:23 crc kubenswrapper[5099]: I0121 18:33:23.463077 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/69430763-5b99-43d3-9530-99409ac0586a-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"69430763-5b99-43d3-9530-99409ac0586a\") " pod="service-telemetry/sg-core-2-build" Jan 21 18:33:23 crc kubenswrapper[5099]: I0121 18:33:23.463133 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/69430763-5b99-43d3-9530-99409ac0586a-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"69430763-5b99-43d3-9530-99409ac0586a\") " pod="service-telemetry/sg-core-2-build" Jan 21 18:33:23 crc kubenswrapper[5099]: I0121 18:33:23.463596 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/69430763-5b99-43d3-9530-99409ac0586a-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"69430763-5b99-43d3-9530-99409ac0586a\") " pod="service-telemetry/sg-core-2-build" Jan 21 18:33:23 crc kubenswrapper[5099]: I0121 18:33:23.464085 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/69430763-5b99-43d3-9530-99409ac0586a-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"69430763-5b99-43d3-9530-99409ac0586a\") " pod="service-telemetry/sg-core-2-build" Jan 21 18:33:23 crc kubenswrapper[5099]: I0121 18:33:23.464137 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/69430763-5b99-43d3-9530-99409ac0586a-container-storage-run\") pod \"sg-core-2-build\" (UID: \"69430763-5b99-43d3-9530-99409ac0586a\") " pod="service-telemetry/sg-core-2-build" Jan 21 18:33:23 crc kubenswrapper[5099]: I0121 18:33:23.464271 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/69430763-5b99-43d3-9530-99409ac0586a-buildworkdir\") pod \"sg-core-2-build\" (UID: 
\"69430763-5b99-43d3-9530-99409ac0586a\") " pod="service-telemetry/sg-core-2-build" Jan 21 18:33:23 crc kubenswrapper[5099]: I0121 18:33:23.465321 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/69430763-5b99-43d3-9530-99409ac0586a-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"69430763-5b99-43d3-9530-99409ac0586a\") " pod="service-telemetry/sg-core-2-build" Jan 21 18:33:23 crc kubenswrapper[5099]: I0121 18:33:23.465496 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/69430763-5b99-43d3-9530-99409ac0586a-build-system-configs\") pod \"sg-core-2-build\" (UID: \"69430763-5b99-43d3-9530-99409ac0586a\") " pod="service-telemetry/sg-core-2-build" Jan 21 18:33:23 crc kubenswrapper[5099]: I0121 18:33:23.466163 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/69430763-5b99-43d3-9530-99409ac0586a-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"69430763-5b99-43d3-9530-99409ac0586a\") " pod="service-telemetry/sg-core-2-build" Jan 21 18:33:23 crc kubenswrapper[5099]: I0121 18:33:23.473011 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-j8qh6-pull\" (UniqueName: \"kubernetes.io/secret/69430763-5b99-43d3-9530-99409ac0586a-builder-dockercfg-j8qh6-pull\") pod \"sg-core-2-build\" (UID: \"69430763-5b99-43d3-9530-99409ac0586a\") " pod="service-telemetry/sg-core-2-build" Jan 21 18:33:23 crc kubenswrapper[5099]: I0121 18:33:23.475089 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-j8qh6-push\" (UniqueName: \"kubernetes.io/secret/69430763-5b99-43d3-9530-99409ac0586a-builder-dockercfg-j8qh6-push\") pod \"sg-core-2-build\" (UID: \"69430763-5b99-43d3-9530-99409ac0586a\") " pod="service-telemetry/sg-core-2-build" Jan 21 18:33:23 crc kubenswrapper[5099]: I0121 18:33:23.484366 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2t2jt\" (UniqueName: \"kubernetes.io/projected/69430763-5b99-43d3-9530-99409ac0586a-kube-api-access-2t2jt\") pod \"sg-core-2-build\" (UID: \"69430763-5b99-43d3-9530-99409ac0586a\") " pod="service-telemetry/sg-core-2-build" Jan 21 18:33:23 crc kubenswrapper[5099]: I0121 18:33:23.573545 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 21 18:33:23 crc kubenswrapper[5099]: I0121 18:33:23.825001 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"] Jan 21 18:33:23 crc kubenswrapper[5099]: I0121 18:33:23.928465 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62068598-f9a6-488b-8d15-551e3cbffeb8" path="/var/lib/kubelet/pods/62068598-f9a6-488b-8d15-551e3cbffeb8/volumes" Jan 21 18:33:24 crc kubenswrapper[5099]: I0121 18:33:24.052189 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"69430763-5b99-43d3-9530-99409ac0586a","Type":"ContainerStarted","Data":"5ef4bcc357d4685298dbafd671d62b52ff1c33d7b124f2c26eaa38a6a1b974e6"} Jan 21 18:33:25 crc kubenswrapper[5099]: I0121 18:33:25.063630 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"69430763-5b99-43d3-9530-99409ac0586a","Type":"ContainerStarted","Data":"3d6962357b5ea45fa383951ded537f7f927c252a1bbc50174537c944c8120ef0"} Jan 21 18:33:26 crc kubenswrapper[5099]: I0121 18:33:26.073672 5099 generic.go:358] "Generic (PLEG): container finished" podID="69430763-5b99-43d3-9530-99409ac0586a" containerID="3d6962357b5ea45fa383951ded537f7f927c252a1bbc50174537c944c8120ef0" exitCode=0 Jan 21 18:33:26 crc kubenswrapper[5099]: I0121 18:33:26.073825 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"69430763-5b99-43d3-9530-99409ac0586a","Type":"ContainerDied","Data":"3d6962357b5ea45fa383951ded537f7f927c252a1bbc50174537c944c8120ef0"} Jan 21 18:33:27 crc kubenswrapper[5099]: I0121 18:33:27.104997 5099 generic.go:358] "Generic (PLEG): container finished" podID="69430763-5b99-43d3-9530-99409ac0586a" containerID="dd822244f5c17db89934a2d3801fc2cb08a8a4872f59f3fe4797744bf94f9f4b" exitCode=0 Jan 21 18:33:27 crc kubenswrapper[5099]: I0121 18:33:27.105119 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"69430763-5b99-43d3-9530-99409ac0586a","Type":"ContainerDied","Data":"dd822244f5c17db89934a2d3801fc2cb08a8a4872f59f3fe4797744bf94f9f4b"} Jan 21 18:33:27 crc kubenswrapper[5099]: I0121 18:33:27.140305 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-2-build_69430763-5b99-43d3-9530-99409ac0586a/manage-dockerfile/0.log" Jan 21 18:33:28 crc kubenswrapper[5099]: I0121 18:33:28.120721 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"69430763-5b99-43d3-9530-99409ac0586a","Type":"ContainerStarted","Data":"bf20091acfca44aa11d74ef45e4f0876369a261b026580525532002ed9a8ca22"} Jan 21 18:33:28 crc kubenswrapper[5099]: I0121 18:33:28.162180 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-core-2-build" podStartSLOduration=5.162148839 podStartE2EDuration="5.162148839s" podCreationTimestamp="2026-01-21 18:33:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:33:28.152394805 +0000 UTC m=+1165.566357266" watchObservedRunningTime="2026-01-21 18:33:28.162148839 +0000 UTC m=+1165.576111320" Jan 21 18:33:52 crc kubenswrapper[5099]: I0121 18:33:52.064516 5099 patch_prober.go:28] interesting pod/machine-config-daemon-hsl47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 18:33:52 crc kubenswrapper[5099]: I0121 18:33:52.065476 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 18:34:00 crc kubenswrapper[5099]: I0121 18:34:00.146621 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483674-mc59c"] Jan 21 18:34:01 crc kubenswrapper[5099]: I0121 18:34:01.050144 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483674-mc59c"] Jan 21 18:34:01 crc kubenswrapper[5099]: I0121 18:34:01.051413 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483674-mc59c" Jan 21 18:34:01 crc kubenswrapper[5099]: I0121 18:34:01.056669 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 18:34:01 crc kubenswrapper[5099]: I0121 18:34:01.056669 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 18:34:01 crc kubenswrapper[5099]: I0121 18:34:01.056969 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-z79sf\"" Jan 21 18:34:01 crc kubenswrapper[5099]: I0121 18:34:01.140544 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xff44\" (UniqueName: \"kubernetes.io/projected/3c963dad-e808-40d6-b540-225e829dc1af-kube-api-access-xff44\") pod \"auto-csr-approver-29483674-mc59c\" (UID: \"3c963dad-e808-40d6-b540-225e829dc1af\") " pod="openshift-infra/auto-csr-approver-29483674-mc59c" Jan 21 18:34:01 crc kubenswrapper[5099]: I0121 18:34:01.242920 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xff44\" (UniqueName: \"kubernetes.io/projected/3c963dad-e808-40d6-b540-225e829dc1af-kube-api-access-xff44\") pod \"auto-csr-approver-29483674-mc59c\" (UID: \"3c963dad-e808-40d6-b540-225e829dc1af\") " pod="openshift-infra/auto-csr-approver-29483674-mc59c" Jan 21 18:34:01 crc kubenswrapper[5099]: I0121 18:34:01.266962 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xff44\" (UniqueName: \"kubernetes.io/projected/3c963dad-e808-40d6-b540-225e829dc1af-kube-api-access-xff44\") pod \"auto-csr-approver-29483674-mc59c\" (UID: \"3c963dad-e808-40d6-b540-225e829dc1af\") " pod="openshift-infra/auto-csr-approver-29483674-mc59c" Jan 21 18:34:01 crc kubenswrapper[5099]: I0121 18:34:01.385711 5099 util.go:30] "No sandbox for pod can be found. 
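The liveness failure at 18:33:52 above is the second of three on a 30-second cadence (18:33:22, 18:33:52, and, in the records further below, 18:34:22), and the third one is what finally marks the container unhealthy and kills it for a restart. That cadence corresponds to a probe spec along the following lines; it is a sketch using the k8s.io/api types (a go.mod requiring k8s.io/api and k8s.io/apimachinery is assumed), where host, port, and path come from the log but periodSeconds and failureThreshold are assumptions inferred from the timing:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	probe := corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Host: "127.0.0.1", // from the probe output in the records above
				Port: intstr.FromInt(8798),
				Path: "/health",
			},
		},
		PeriodSeconds:    30, // assumption: matches the 30s spacing of the failure records
		FailureThreshold: 3,  // assumption: the restart follows the third consecutive failure
	}
	fmt.Printf("%+v\n", probe)
}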
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483674-mc59c" Jan 21 18:34:01 crc kubenswrapper[5099]: I0121 18:34:01.640623 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483674-mc59c"] Jan 21 18:34:02 crc kubenswrapper[5099]: I0121 18:34:02.396628 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483674-mc59c" event={"ID":"3c963dad-e808-40d6-b540-225e829dc1af","Type":"ContainerStarted","Data":"2012284f89cec4f99d356a5782e579620c691e8c720da916bc956078d61e0e83"} Jan 21 18:34:04 crc kubenswrapper[5099]: I0121 18:34:04.385890 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_62068598-f9a6-488b-8d15-551e3cbffeb8/docker-build/0.log" Jan 21 18:34:04 crc kubenswrapper[5099]: I0121 18:34:04.392768 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_62068598-f9a6-488b-8d15-551e3cbffeb8/docker-build/0.log" Jan 21 18:34:04 crc kubenswrapper[5099]: I0121 18:34:04.413269 5099 generic.go:358] "Generic (PLEG): container finished" podID="3c963dad-e808-40d6-b540-225e829dc1af" containerID="e96bfbe549960756682345787e070fe9c280eed8c0e99dd52837af809d26d8df" exitCode=0 Jan 21 18:34:04 crc kubenswrapper[5099]: I0121 18:34:04.413469 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483674-mc59c" event={"ID":"3c963dad-e808-40d6-b540-225e829dc1af","Type":"ContainerDied","Data":"e96bfbe549960756682345787e070fe9c280eed8c0e99dd52837af809d26d8df"} Jan 21 18:34:04 crc kubenswrapper[5099]: I0121 18:34:04.455391 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6pvpm_d9b34413-4767-4d59-b13b-8f882453977a/kube-multus/0.log" Jan 21 18:34:04 crc kubenswrapper[5099]: I0121 18:34:04.455749 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6pvpm_d9b34413-4767-4d59-b13b-8f882453977a/kube-multus/0.log" Jan 21 18:34:04 crc kubenswrapper[5099]: I0121 18:34:04.461907 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 18:34:04 crc kubenswrapper[5099]: I0121 18:34:04.462128 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 18:34:05 crc kubenswrapper[5099]: I0121 18:34:05.675524 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483674-mc59c" Jan 21 18:34:05 crc kubenswrapper[5099]: I0121 18:34:05.827635 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xff44\" (UniqueName: \"kubernetes.io/projected/3c963dad-e808-40d6-b540-225e829dc1af-kube-api-access-xff44\") pod \"3c963dad-e808-40d6-b540-225e829dc1af\" (UID: \"3c963dad-e808-40d6-b540-225e829dc1af\") " Jan 21 18:34:05 crc kubenswrapper[5099]: I0121 18:34:05.836756 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c963dad-e808-40d6-b540-225e829dc1af-kube-api-access-xff44" (OuterVolumeSpecName: "kube-api-access-xff44") pod "3c963dad-e808-40d6-b540-225e829dc1af" (UID: "3c963dad-e808-40d6-b540-225e829dc1af"). InnerVolumeSpecName "kube-api-access-xff44". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:34:05 crc kubenswrapper[5099]: I0121 18:34:05.929192 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xff44\" (UniqueName: \"kubernetes.io/projected/3c963dad-e808-40d6-b540-225e829dc1af-kube-api-access-xff44\") on node \"crc\" DevicePath \"\"" Jan 21 18:34:06 crc kubenswrapper[5099]: I0121 18:34:06.432139 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483674-mc59c" event={"ID":"3c963dad-e808-40d6-b540-225e829dc1af","Type":"ContainerDied","Data":"2012284f89cec4f99d356a5782e579620c691e8c720da916bc956078d61e0e83"} Jan 21 18:34:06 crc kubenswrapper[5099]: I0121 18:34:06.432784 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2012284f89cec4f99d356a5782e579620c691e8c720da916bc956078d61e0e83" Jan 21 18:34:06 crc kubenswrapper[5099]: I0121 18:34:06.432913 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483674-mc59c" Jan 21 18:34:06 crc kubenswrapper[5099]: I0121 18:34:06.761944 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483668-8mxnt"] Jan 21 18:34:06 crc kubenswrapper[5099]: I0121 18:34:06.769437 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483668-8mxnt"] Jan 21 18:34:07 crc kubenswrapper[5099]: I0121 18:34:07.923533 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7270666a-ed0b-4c75-b2ef-38c616af082a" path="/var/lib/kubelet/pods/7270666a-ed0b-4c75-b2ef-38c616af082a/volumes" Jan 21 18:34:16 crc kubenswrapper[5099]: I0121 18:34:16.706289 5099 scope.go:117] "RemoveContainer" containerID="9db4f0aa917799e06e268f75868aab42ce96be234817c3baa1f0ccfacf6a0228" Jan 21 18:34:22 crc kubenswrapper[5099]: I0121 18:34:22.064991 5099 patch_prober.go:28] interesting pod/machine-config-daemon-hsl47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 18:34:22 crc kubenswrapper[5099]: I0121 18:34:22.065977 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 18:34:22 crc kubenswrapper[5099]: I0121 18:34:22.066063 5099 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" Jan 21 18:34:22 crc kubenswrapper[5099]: I0121 18:34:22.066887 5099 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cf42f9592aaf93662bf63df43e028bef59eb8696172829a214d5c769d98dba4f"} pod="openshift-machine-config-operator/machine-config-daemon-hsl47" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 18:34:22 crc kubenswrapper[5099]: I0121 18:34:22.066942 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" 
containerID="cri-o://cf42f9592aaf93662bf63df43e028bef59eb8696172829a214d5c769d98dba4f" gracePeriod=600 Jan 21 18:34:24 crc kubenswrapper[5099]: I0121 18:34:24.683567 5099 generic.go:358] "Generic (PLEG): container finished" podID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerID="cf42f9592aaf93662bf63df43e028bef59eb8696172829a214d5c769d98dba4f" exitCode=0 Jan 21 18:34:24 crc kubenswrapper[5099]: I0121 18:34:24.683648 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" event={"ID":"b19b831f-eaf0-4c77-859b-84eb9a5f233c","Type":"ContainerDied","Data":"cf42f9592aaf93662bf63df43e028bef59eb8696172829a214d5c769d98dba4f"} Jan 21 18:34:24 crc kubenswrapper[5099]: I0121 18:34:24.684794 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" event={"ID":"b19b831f-eaf0-4c77-859b-84eb9a5f233c","Type":"ContainerStarted","Data":"c34863c08d0134cd7b5207ebf16a5d100ecccdeb0556f0934b642e587f43c4fa"} Jan 21 18:34:24 crc kubenswrapper[5099]: I0121 18:34:24.684823 5099 scope.go:117] "RemoveContainer" containerID="47739ad43226ccaa23d66e4f75a21cb2d01702a76a41ce8c63bde01121040b33" Jan 21 18:36:00 crc kubenswrapper[5099]: I0121 18:36:00.145863 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483676-9pvxk"] Jan 21 18:36:00 crc kubenswrapper[5099]: I0121 18:36:00.147868 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3c963dad-e808-40d6-b540-225e829dc1af" containerName="oc" Jan 21 18:36:00 crc kubenswrapper[5099]: I0121 18:36:00.147892 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c963dad-e808-40d6-b540-225e829dc1af" containerName="oc" Jan 21 18:36:00 crc kubenswrapper[5099]: I0121 18:36:00.148085 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="3c963dad-e808-40d6-b540-225e829dc1af" containerName="oc" Jan 21 18:36:00 crc kubenswrapper[5099]: I0121 18:36:00.827715 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483676-9pvxk"] Jan 21 18:36:00 crc kubenswrapper[5099]: I0121 18:36:00.827943 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483676-9pvxk" Jan 21 18:36:00 crc kubenswrapper[5099]: I0121 18:36:00.831688 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 18:36:00 crc kubenswrapper[5099]: I0121 18:36:00.831752 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-z79sf\"" Jan 21 18:36:00 crc kubenswrapper[5099]: I0121 18:36:00.831881 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 18:36:00 crc kubenswrapper[5099]: I0121 18:36:00.930500 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwhfl\" (UniqueName: \"kubernetes.io/projected/d7616d8d-d2f4-463e-a174-b133d0fdbac9-kube-api-access-mwhfl\") pod \"auto-csr-approver-29483676-9pvxk\" (UID: \"d7616d8d-d2f4-463e-a174-b133d0fdbac9\") " pod="openshift-infra/auto-csr-approver-29483676-9pvxk" Jan 21 18:36:01 crc kubenswrapper[5099]: I0121 18:36:01.031789 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mwhfl\" (UniqueName: \"kubernetes.io/projected/d7616d8d-d2f4-463e-a174-b133d0fdbac9-kube-api-access-mwhfl\") pod \"auto-csr-approver-29483676-9pvxk\" (UID: \"d7616d8d-d2f4-463e-a174-b133d0fdbac9\") " pod="openshift-infra/auto-csr-approver-29483676-9pvxk" Jan 21 18:36:01 crc kubenswrapper[5099]: I0121 18:36:01.057941 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwhfl\" (UniqueName: \"kubernetes.io/projected/d7616d8d-d2f4-463e-a174-b133d0fdbac9-kube-api-access-mwhfl\") pod \"auto-csr-approver-29483676-9pvxk\" (UID: \"d7616d8d-d2f4-463e-a174-b133d0fdbac9\") " pod="openshift-infra/auto-csr-approver-29483676-9pvxk" Jan 21 18:36:01 crc kubenswrapper[5099]: I0121 18:36:01.148621 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483676-9pvxk" Jan 21 18:36:01 crc kubenswrapper[5099]: I0121 18:36:01.431338 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483676-9pvxk"] Jan 21 18:36:01 crc kubenswrapper[5099]: I0121 18:36:01.668776 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483676-9pvxk" event={"ID":"d7616d8d-d2f4-463e-a174-b133d0fdbac9","Type":"ContainerStarted","Data":"63fd7a02334d4d414cd3db1d184af93d546ab49d5387dd6adf3ead8c63006a0a"} Jan 21 18:36:03 crc kubenswrapper[5099]: I0121 18:36:03.700103 5099 generic.go:358] "Generic (PLEG): container finished" podID="d7616d8d-d2f4-463e-a174-b133d0fdbac9" containerID="251b96961c77e91054a6f8ad46f7c63a02622fd717ecd390ee019c39b74f42bd" exitCode=0 Jan 21 18:36:03 crc kubenswrapper[5099]: I0121 18:36:03.700324 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483676-9pvxk" event={"ID":"d7616d8d-d2f4-463e-a174-b133d0fdbac9","Type":"ContainerDied","Data":"251b96961c77e91054a6f8ad46f7c63a02622fd717ecd390ee019c39b74f42bd"} Jan 21 18:36:05 crc kubenswrapper[5099]: I0121 18:36:05.011920 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483676-9pvxk" Jan 21 18:36:05 crc kubenswrapper[5099]: I0121 18:36:05.097967 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwhfl\" (UniqueName: \"kubernetes.io/projected/d7616d8d-d2f4-463e-a174-b133d0fdbac9-kube-api-access-mwhfl\") pod \"d7616d8d-d2f4-463e-a174-b133d0fdbac9\" (UID: \"d7616d8d-d2f4-463e-a174-b133d0fdbac9\") " Jan 21 18:36:05 crc kubenswrapper[5099]: I0121 18:36:05.522120 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7616d8d-d2f4-463e-a174-b133d0fdbac9-kube-api-access-mwhfl" (OuterVolumeSpecName: "kube-api-access-mwhfl") pod "d7616d8d-d2f4-463e-a174-b133d0fdbac9" (UID: "d7616d8d-d2f4-463e-a174-b133d0fdbac9"). InnerVolumeSpecName "kube-api-access-mwhfl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:36:05 crc kubenswrapper[5099]: I0121 18:36:05.530642 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mwhfl\" (UniqueName: \"kubernetes.io/projected/d7616d8d-d2f4-463e-a174-b133d0fdbac9-kube-api-access-mwhfl\") on node \"crc\" DevicePath \"\"" Jan 21 18:36:05 crc kubenswrapper[5099]: I0121 18:36:05.716230 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483676-9pvxk" event={"ID":"d7616d8d-d2f4-463e-a174-b133d0fdbac9","Type":"ContainerDied","Data":"63fd7a02334d4d414cd3db1d184af93d546ab49d5387dd6adf3ead8c63006a0a"} Jan 21 18:36:05 crc kubenswrapper[5099]: I0121 18:36:05.716292 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63fd7a02334d4d414cd3db1d184af93d546ab49d5387dd6adf3ead8c63006a0a" Jan 21 18:36:05 crc kubenswrapper[5099]: I0121 18:36:05.716257 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483676-9pvxk" Jan 21 18:36:06 crc kubenswrapper[5099]: I0121 18:36:06.089631 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483670-ht5fw"] Jan 21 18:36:06 crc kubenswrapper[5099]: I0121 18:36:06.095021 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483670-ht5fw"] Jan 21 18:36:07 crc kubenswrapper[5099]: I0121 18:36:07.924934 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae7199af-7d8a-4536-a3d5-82da6a93ce67" path="/var/lib/kubelet/pods/ae7199af-7d8a-4536-a3d5-82da6a93ce67/volumes" Jan 21 18:36:16 crc kubenswrapper[5099]: I0121 18:36:16.862184 5099 scope.go:117] "RemoveContainer" containerID="015241055a520841093f569cf85743136964fd459b52302e80e0c34feacf5659" Jan 21 18:36:28 crc kubenswrapper[5099]: I0121 18:36:28.616529 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fc26l"] Jan 21 18:36:28 crc kubenswrapper[5099]: I0121 18:36:28.618524 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d7616d8d-d2f4-463e-a174-b133d0fdbac9" containerName="oc" Jan 21 18:36:28 crc kubenswrapper[5099]: I0121 18:36:28.618549 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7616d8d-d2f4-463e-a174-b133d0fdbac9" containerName="oc" Jan 21 18:36:28 crc kubenswrapper[5099]: I0121 18:36:28.618700 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="d7616d8d-d2f4-463e-a174-b133d0fdbac9" containerName="oc" Jan 21 18:36:37 crc kubenswrapper[5099]: I0121 18:36:37.411788 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fc26l" Jan 21 18:36:37 crc kubenswrapper[5099]: I0121 18:36:37.421650 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fc26l"] Jan 21 18:36:37 crc kubenswrapper[5099]: I0121 18:36:37.473398 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hgrn\" (UniqueName: \"kubernetes.io/projected/e37cee57-575e-48ce-9363-4dee3f80ce0f-kube-api-access-9hgrn\") pod \"community-operators-fc26l\" (UID: \"e37cee57-575e-48ce-9363-4dee3f80ce0f\") " pod="openshift-marketplace/community-operators-fc26l" Jan 21 18:36:37 crc kubenswrapper[5099]: I0121 18:36:37.473704 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e37cee57-575e-48ce-9363-4dee3f80ce0f-utilities\") pod \"community-operators-fc26l\" (UID: \"e37cee57-575e-48ce-9363-4dee3f80ce0f\") " pod="openshift-marketplace/community-operators-fc26l" Jan 21 18:36:37 crc kubenswrapper[5099]: I0121 18:36:37.473892 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e37cee57-575e-48ce-9363-4dee3f80ce0f-catalog-content\") pod \"community-operators-fc26l\" (UID: \"e37cee57-575e-48ce-9363-4dee3f80ce0f\") " pod="openshift-marketplace/community-operators-fc26l" Jan 21 18:36:37 crc kubenswrapper[5099]: I0121 18:36:37.577434 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9hgrn\" (UniqueName: \"kubernetes.io/projected/e37cee57-575e-48ce-9363-4dee3f80ce0f-kube-api-access-9hgrn\") pod \"community-operators-fc26l\" (UID: 
\"e37cee57-575e-48ce-9363-4dee3f80ce0f\") " pod="openshift-marketplace/community-operators-fc26l" Jan 21 18:36:37 crc kubenswrapper[5099]: I0121 18:36:37.577874 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e37cee57-575e-48ce-9363-4dee3f80ce0f-utilities\") pod \"community-operators-fc26l\" (UID: \"e37cee57-575e-48ce-9363-4dee3f80ce0f\") " pod="openshift-marketplace/community-operators-fc26l" Jan 21 18:36:37 crc kubenswrapper[5099]: I0121 18:36:37.578024 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e37cee57-575e-48ce-9363-4dee3f80ce0f-catalog-content\") pod \"community-operators-fc26l\" (UID: \"e37cee57-575e-48ce-9363-4dee3f80ce0f\") " pod="openshift-marketplace/community-operators-fc26l" Jan 21 18:36:37 crc kubenswrapper[5099]: I0121 18:36:37.579914 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e37cee57-575e-48ce-9363-4dee3f80ce0f-catalog-content\") pod \"community-operators-fc26l\" (UID: \"e37cee57-575e-48ce-9363-4dee3f80ce0f\") " pod="openshift-marketplace/community-operators-fc26l" Jan 21 18:36:37 crc kubenswrapper[5099]: I0121 18:36:37.580190 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e37cee57-575e-48ce-9363-4dee3f80ce0f-utilities\") pod \"community-operators-fc26l\" (UID: \"e37cee57-575e-48ce-9363-4dee3f80ce0f\") " pod="openshift-marketplace/community-operators-fc26l" Jan 21 18:36:37 crc kubenswrapper[5099]: I0121 18:36:37.609172 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hgrn\" (UniqueName: \"kubernetes.io/projected/e37cee57-575e-48ce-9363-4dee3f80ce0f-kube-api-access-9hgrn\") pod \"community-operators-fc26l\" (UID: \"e37cee57-575e-48ce-9363-4dee3f80ce0f\") " pod="openshift-marketplace/community-operators-fc26l" Jan 21 18:36:37 crc kubenswrapper[5099]: I0121 18:36:37.747399 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fc26l" Jan 21 18:36:38 crc kubenswrapper[5099]: I0121 18:36:38.223949 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fc26l"] Jan 21 18:36:38 crc kubenswrapper[5099]: I0121 18:36:38.236061 5099 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 18:36:38 crc kubenswrapper[5099]: I0121 18:36:38.355967 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fc26l" event={"ID":"e37cee57-575e-48ce-9363-4dee3f80ce0f","Type":"ContainerStarted","Data":"a6e7eec7ea565e84805b78952c298392d624cecc939a24114b7cbed269da488b"} Jan 21 18:36:40 crc kubenswrapper[5099]: I0121 18:36:40.375049 5099 generic.go:358] "Generic (PLEG): container finished" podID="e37cee57-575e-48ce-9363-4dee3f80ce0f" containerID="e8534523504851736e026cccdb27f30e011efe81a58c9eedc7621b8e276a9fd8" exitCode=0 Jan 21 18:36:40 crc kubenswrapper[5099]: I0121 18:36:40.375099 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fc26l" event={"ID":"e37cee57-575e-48ce-9363-4dee3f80ce0f","Type":"ContainerDied","Data":"e8534523504851736e026cccdb27f30e011efe81a58c9eedc7621b8e276a9fd8"} Jan 21 18:36:41 crc kubenswrapper[5099]: I0121 18:36:41.386309 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fc26l" event={"ID":"e37cee57-575e-48ce-9363-4dee3f80ce0f","Type":"ContainerStarted","Data":"4a4b796e39e83e421298fa78e0232f8f6efcd2028ebdc5d31e0e0ad354ccb11f"} Jan 21 18:36:42 crc kubenswrapper[5099]: I0121 18:36:42.400570 5099 generic.go:358] "Generic (PLEG): container finished" podID="e37cee57-575e-48ce-9363-4dee3f80ce0f" containerID="4a4b796e39e83e421298fa78e0232f8f6efcd2028ebdc5d31e0e0ad354ccb11f" exitCode=0 Jan 21 18:36:42 crc kubenswrapper[5099]: I0121 18:36:42.403648 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fc26l" event={"ID":"e37cee57-575e-48ce-9363-4dee3f80ce0f","Type":"ContainerDied","Data":"4a4b796e39e83e421298fa78e0232f8f6efcd2028ebdc5d31e0e0ad354ccb11f"} Jan 21 18:36:43 crc kubenswrapper[5099]: I0121 18:36:43.424009 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fc26l" event={"ID":"e37cee57-575e-48ce-9363-4dee3f80ce0f","Type":"ContainerStarted","Data":"7768e76cec6ea6b122ad9d2b7c9d5579ed457427a8c835709e281510b3a2e1d8"} Jan 21 18:36:47 crc kubenswrapper[5099]: I0121 18:36:47.748555 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-fc26l" Jan 21 18:36:47 crc kubenswrapper[5099]: I0121 18:36:47.749672 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fc26l" Jan 21 18:36:47 crc kubenswrapper[5099]: I0121 18:36:47.795002 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fc26l" Jan 21 18:36:47 crc kubenswrapper[5099]: I0121 18:36:47.816185 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fc26l" podStartSLOduration=19.150263225 podStartE2EDuration="19.816160048s" podCreationTimestamp="2026-01-21 18:36:28 +0000 UTC" firstStartedPulling="2026-01-21 18:36:40.37635687 +0000 UTC m=+1357.790319331" lastFinishedPulling="2026-01-21 
18:36:41.042253693 +0000 UTC m=+1358.456216154" observedRunningTime="2026-01-21 18:36:43.450646478 +0000 UTC m=+1360.864608939" watchObservedRunningTime="2026-01-21 18:36:47.816160048 +0000 UTC m=+1365.230122509" Jan 21 18:36:48 crc kubenswrapper[5099]: I0121 18:36:48.540284 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fc26l" Jan 21 18:36:48 crc kubenswrapper[5099]: I0121 18:36:48.593341 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fc26l"] Jan 21 18:36:50 crc kubenswrapper[5099]: I0121 18:36:50.513360 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fc26l" podUID="e37cee57-575e-48ce-9363-4dee3f80ce0f" containerName="registry-server" containerID="cri-o://7768e76cec6ea6b122ad9d2b7c9d5579ed457427a8c835709e281510b3a2e1d8" gracePeriod=2 Jan 21 18:36:52 crc kubenswrapper[5099]: I0121 18:36:52.065142 5099 patch_prober.go:28] interesting pod/machine-config-daemon-hsl47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 18:36:52 crc kubenswrapper[5099]: I0121 18:36:52.065996 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 18:36:52 crc kubenswrapper[5099]: I0121 18:36:52.532971 5099 generic.go:358] "Generic (PLEG): container finished" podID="e37cee57-575e-48ce-9363-4dee3f80ce0f" containerID="7768e76cec6ea6b122ad9d2b7c9d5579ed457427a8c835709e281510b3a2e1d8" exitCode=0 Jan 21 18:36:52 crc kubenswrapper[5099]: I0121 18:36:52.533050 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fc26l" event={"ID":"e37cee57-575e-48ce-9363-4dee3f80ce0f","Type":"ContainerDied","Data":"7768e76cec6ea6b122ad9d2b7c9d5579ed457427a8c835709e281510b3a2e1d8"} Jan 21 18:36:53 crc kubenswrapper[5099]: I0121 18:36:53.069296 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fc26l" Jan 21 18:36:53 crc kubenswrapper[5099]: I0121 18:36:53.145475 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e37cee57-575e-48ce-9363-4dee3f80ce0f-utilities\") pod \"e37cee57-575e-48ce-9363-4dee3f80ce0f\" (UID: \"e37cee57-575e-48ce-9363-4dee3f80ce0f\") " Jan 21 18:36:53 crc kubenswrapper[5099]: I0121 18:36:53.145588 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e37cee57-575e-48ce-9363-4dee3f80ce0f-catalog-content\") pod \"e37cee57-575e-48ce-9363-4dee3f80ce0f\" (UID: \"e37cee57-575e-48ce-9363-4dee3f80ce0f\") " Jan 21 18:36:53 crc kubenswrapper[5099]: I0121 18:36:53.145631 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hgrn\" (UniqueName: \"kubernetes.io/projected/e37cee57-575e-48ce-9363-4dee3f80ce0f-kube-api-access-9hgrn\") pod \"e37cee57-575e-48ce-9363-4dee3f80ce0f\" (UID: \"e37cee57-575e-48ce-9363-4dee3f80ce0f\") " Jan 21 18:36:53 crc kubenswrapper[5099]: I0121 18:36:53.147289 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e37cee57-575e-48ce-9363-4dee3f80ce0f-utilities" (OuterVolumeSpecName: "utilities") pod "e37cee57-575e-48ce-9363-4dee3f80ce0f" (UID: "e37cee57-575e-48ce-9363-4dee3f80ce0f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:36:53 crc kubenswrapper[5099]: I0121 18:36:53.153491 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e37cee57-575e-48ce-9363-4dee3f80ce0f-kube-api-access-9hgrn" (OuterVolumeSpecName: "kube-api-access-9hgrn") pod "e37cee57-575e-48ce-9363-4dee3f80ce0f" (UID: "e37cee57-575e-48ce-9363-4dee3f80ce0f"). InnerVolumeSpecName "kube-api-access-9hgrn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:36:53 crc kubenswrapper[5099]: I0121 18:36:53.200551 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e37cee57-575e-48ce-9363-4dee3f80ce0f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e37cee57-575e-48ce-9363-4dee3f80ce0f" (UID: "e37cee57-575e-48ce-9363-4dee3f80ce0f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:36:53 crc kubenswrapper[5099]: I0121 18:36:53.246873 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e37cee57-575e-48ce-9363-4dee3f80ce0f-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 18:36:53 crc kubenswrapper[5099]: I0121 18:36:53.246920 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e37cee57-575e-48ce-9363-4dee3f80ce0f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 18:36:53 crc kubenswrapper[5099]: I0121 18:36:53.246932 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9hgrn\" (UniqueName: \"kubernetes.io/projected/e37cee57-575e-48ce-9363-4dee3f80ce0f-kube-api-access-9hgrn\") on node \"crc\" DevicePath \"\"" Jan 21 18:36:53 crc kubenswrapper[5099]: I0121 18:36:53.544318 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fc26l" Jan 21 18:36:53 crc kubenswrapper[5099]: I0121 18:36:53.544383 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fc26l" event={"ID":"e37cee57-575e-48ce-9363-4dee3f80ce0f","Type":"ContainerDied","Data":"a6e7eec7ea565e84805b78952c298392d624cecc939a24114b7cbed269da488b"} Jan 21 18:36:53 crc kubenswrapper[5099]: I0121 18:36:53.544546 5099 scope.go:117] "RemoveContainer" containerID="7768e76cec6ea6b122ad9d2b7c9d5579ed457427a8c835709e281510b3a2e1d8" Jan 21 18:36:53 crc kubenswrapper[5099]: I0121 18:36:53.567663 5099 scope.go:117] "RemoveContainer" containerID="4a4b796e39e83e421298fa78e0232f8f6efcd2028ebdc5d31e0e0ad354ccb11f" Jan 21 18:36:53 crc kubenswrapper[5099]: I0121 18:36:53.593989 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fc26l"] Jan 21 18:36:53 crc kubenswrapper[5099]: I0121 18:36:53.602499 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fc26l"] Jan 21 18:36:53 crc kubenswrapper[5099]: I0121 18:36:53.631345 5099 scope.go:117] "RemoveContainer" containerID="e8534523504851736e026cccdb27f30e011efe81a58c9eedc7621b8e276a9fd8" Jan 21 18:36:53 crc kubenswrapper[5099]: I0121 18:36:53.926488 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e37cee57-575e-48ce-9363-4dee3f80ce0f" path="/var/lib/kubelet/pods/e37cee57-575e-48ce-9363-4dee3f80ce0f/volumes" Jan 21 18:37:04 crc kubenswrapper[5099]: I0121 18:37:04.660769 5099 generic.go:358] "Generic (PLEG): container finished" podID="69430763-5b99-43d3-9530-99409ac0586a" containerID="bf20091acfca44aa11d74ef45e4f0876369a261b026580525532002ed9a8ca22" exitCode=0 Jan 21 18:37:04 crc kubenswrapper[5099]: I0121 18:37:04.660881 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"69430763-5b99-43d3-9530-99409ac0586a","Type":"ContainerDied","Data":"bf20091acfca44aa11d74ef45e4f0876369a261b026580525532002ed9a8ca22"} Jan 21 18:37:05 crc kubenswrapper[5099]: I0121 18:37:05.940673 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 21 18:37:06 crc kubenswrapper[5099]: I0121 18:37:06.027926 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/69430763-5b99-43d3-9530-99409ac0586a-build-system-configs\") pod \"69430763-5b99-43d3-9530-99409ac0586a\" (UID: \"69430763-5b99-43d3-9530-99409ac0586a\") " Jan 21 18:37:06 crc kubenswrapper[5099]: I0121 18:37:06.028012 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/69430763-5b99-43d3-9530-99409ac0586a-container-storage-run\") pod \"69430763-5b99-43d3-9530-99409ac0586a\" (UID: \"69430763-5b99-43d3-9530-99409ac0586a\") " Jan 21 18:37:06 crc kubenswrapper[5099]: I0121 18:37:06.028072 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/69430763-5b99-43d3-9530-99409ac0586a-build-ca-bundles\") pod \"69430763-5b99-43d3-9530-99409ac0586a\" (UID: \"69430763-5b99-43d3-9530-99409ac0586a\") " Jan 21 18:37:06 crc kubenswrapper[5099]: I0121 18:37:06.028264 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/69430763-5b99-43d3-9530-99409ac0586a-build-blob-cache\") pod \"69430763-5b99-43d3-9530-99409ac0586a\" (UID: \"69430763-5b99-43d3-9530-99409ac0586a\") " Jan 21 18:37:06 crc kubenswrapper[5099]: I0121 18:37:06.028296 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-j8qh6-pull\" (UniqueName: \"kubernetes.io/secret/69430763-5b99-43d3-9530-99409ac0586a-builder-dockercfg-j8qh6-pull\") pod \"69430763-5b99-43d3-9530-99409ac0586a\" (UID: \"69430763-5b99-43d3-9530-99409ac0586a\") " Jan 21 18:37:06 crc kubenswrapper[5099]: I0121 18:37:06.028323 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/69430763-5b99-43d3-9530-99409ac0586a-node-pullsecrets\") pod \"69430763-5b99-43d3-9530-99409ac0586a\" (UID: \"69430763-5b99-43d3-9530-99409ac0586a\") " Jan 21 18:37:06 crc kubenswrapper[5099]: I0121 18:37:06.028429 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2t2jt\" (UniqueName: \"kubernetes.io/projected/69430763-5b99-43d3-9530-99409ac0586a-kube-api-access-2t2jt\") pod \"69430763-5b99-43d3-9530-99409ac0586a\" (UID: \"69430763-5b99-43d3-9530-99409ac0586a\") " Jan 21 18:37:06 crc kubenswrapper[5099]: I0121 18:37:06.028563 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/69430763-5b99-43d3-9530-99409ac0586a-buildcachedir\") pod \"69430763-5b99-43d3-9530-99409ac0586a\" (UID: \"69430763-5b99-43d3-9530-99409ac0586a\") " Jan 21 18:37:06 crc kubenswrapper[5099]: I0121 18:37:06.028614 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/69430763-5b99-43d3-9530-99409ac0586a-buildworkdir\") pod \"69430763-5b99-43d3-9530-99409ac0586a\" (UID: \"69430763-5b99-43d3-9530-99409ac0586a\") " Jan 21 18:37:06 crc kubenswrapper[5099]: I0121 18:37:06.028665 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/69430763-5b99-43d3-9530-99409ac0586a-container-storage-root\") pod \"69430763-5b99-43d3-9530-99409ac0586a\" (UID: \"69430763-5b99-43d3-9530-99409ac0586a\") " Jan 21 18:37:06 crc kubenswrapper[5099]: I0121 18:37:06.028666 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69430763-5b99-43d3-9530-99409ac0586a-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "69430763-5b99-43d3-9530-99409ac0586a" (UID: "69430763-5b99-43d3-9530-99409ac0586a"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 18:37:06 crc kubenswrapper[5099]: I0121 18:37:06.028710 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/69430763-5b99-43d3-9530-99409ac0586a-build-proxy-ca-bundles\") pod \"69430763-5b99-43d3-9530-99409ac0586a\" (UID: \"69430763-5b99-43d3-9530-99409ac0586a\") " Jan 21 18:37:06 crc kubenswrapper[5099]: I0121 18:37:06.028772 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-j8qh6-push\" (UniqueName: \"kubernetes.io/secret/69430763-5b99-43d3-9530-99409ac0586a-builder-dockercfg-j8qh6-push\") pod \"69430763-5b99-43d3-9530-99409ac0586a\" (UID: \"69430763-5b99-43d3-9530-99409ac0586a\") " Jan 21 18:37:06 crc kubenswrapper[5099]: I0121 18:37:06.029155 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69430763-5b99-43d3-9530-99409ac0586a-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "69430763-5b99-43d3-9530-99409ac0586a" (UID: "69430763-5b99-43d3-9530-99409ac0586a"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:37:06 crc kubenswrapper[5099]: I0121 18:37:06.029192 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69430763-5b99-43d3-9530-99409ac0586a-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "69430763-5b99-43d3-9530-99409ac0586a" (UID: "69430763-5b99-43d3-9530-99409ac0586a"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:37:06 crc kubenswrapper[5099]: I0121 18:37:06.029466 5099 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/69430763-5b99-43d3-9530-99409ac0586a-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 21 18:37:06 crc kubenswrapper[5099]: I0121 18:37:06.029485 5099 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/69430763-5b99-43d3-9530-99409ac0586a-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 21 18:37:06 crc kubenswrapper[5099]: I0121 18:37:06.029498 5099 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/69430763-5b99-43d3-9530-99409ac0586a-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 18:37:06 crc kubenswrapper[5099]: I0121 18:37:06.029649 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69430763-5b99-43d3-9530-99409ac0586a-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "69430763-5b99-43d3-9530-99409ac0586a" (UID: "69430763-5b99-43d3-9530-99409ac0586a"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:37:06 crc kubenswrapper[5099]: I0121 18:37:06.029754 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69430763-5b99-43d3-9530-99409ac0586a-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "69430763-5b99-43d3-9530-99409ac0586a" (UID: "69430763-5b99-43d3-9530-99409ac0586a"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 18:37:06 crc kubenswrapper[5099]: I0121 18:37:06.030292 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69430763-5b99-43d3-9530-99409ac0586a-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "69430763-5b99-43d3-9530-99409ac0586a" (UID: "69430763-5b99-43d3-9530-99409ac0586a"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:37:06 crc kubenswrapper[5099]: I0121 18:37:06.037243 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69430763-5b99-43d3-9530-99409ac0586a-kube-api-access-2t2jt" (OuterVolumeSpecName: "kube-api-access-2t2jt") pod "69430763-5b99-43d3-9530-99409ac0586a" (UID: "69430763-5b99-43d3-9530-99409ac0586a"). InnerVolumeSpecName "kube-api-access-2t2jt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:37:06 crc kubenswrapper[5099]: I0121 18:37:06.037596 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69430763-5b99-43d3-9530-99409ac0586a-builder-dockercfg-j8qh6-push" (OuterVolumeSpecName: "builder-dockercfg-j8qh6-push") pod "69430763-5b99-43d3-9530-99409ac0586a" (UID: "69430763-5b99-43d3-9530-99409ac0586a"). InnerVolumeSpecName "builder-dockercfg-j8qh6-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:37:06 crc kubenswrapper[5099]: I0121 18:37:06.038300 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69430763-5b99-43d3-9530-99409ac0586a-builder-dockercfg-j8qh6-pull" (OuterVolumeSpecName: "builder-dockercfg-j8qh6-pull") pod "69430763-5b99-43d3-9530-99409ac0586a" (UID: "69430763-5b99-43d3-9530-99409ac0586a"). InnerVolumeSpecName "builder-dockercfg-j8qh6-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:37:06 crc kubenswrapper[5099]: I0121 18:37:06.054849 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69430763-5b99-43d3-9530-99409ac0586a-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "69430763-5b99-43d3-9530-99409ac0586a" (UID: "69430763-5b99-43d3-9530-99409ac0586a"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:37:06 crc kubenswrapper[5099]: I0121 18:37:06.134828 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2t2jt\" (UniqueName: \"kubernetes.io/projected/69430763-5b99-43d3-9530-99409ac0586a-kube-api-access-2t2jt\") on node \"crc\" DevicePath \"\"" Jan 21 18:37:06 crc kubenswrapper[5099]: I0121 18:37:06.134876 5099 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/69430763-5b99-43d3-9530-99409ac0586a-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 21 18:37:06 crc kubenswrapper[5099]: I0121 18:37:06.134885 5099 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/69430763-5b99-43d3-9530-99409ac0586a-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 21 18:37:06 crc kubenswrapper[5099]: I0121 18:37:06.134897 5099 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/69430763-5b99-43d3-9530-99409ac0586a-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 18:37:06 crc kubenswrapper[5099]: I0121 18:37:06.134907 5099 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-j8qh6-push\" (UniqueName: \"kubernetes.io/secret/69430763-5b99-43d3-9530-99409ac0586a-builder-dockercfg-j8qh6-push\") on node \"crc\" DevicePath \"\"" Jan 21 18:37:06 crc kubenswrapper[5099]: I0121 18:37:06.134919 5099 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/69430763-5b99-43d3-9530-99409ac0586a-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 21 18:37:06 crc kubenswrapper[5099]: I0121 18:37:06.134973 5099 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-j8qh6-pull\" (UniqueName: \"kubernetes.io/secret/69430763-5b99-43d3-9530-99409ac0586a-builder-dockercfg-j8qh6-pull\") on node \"crc\" DevicePath \"\"" Jan 21 18:37:06 crc kubenswrapper[5099]: I0121 18:37:06.440398 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69430763-5b99-43d3-9530-99409ac0586a-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "69430763-5b99-43d3-9530-99409ac0586a" (UID: "69430763-5b99-43d3-9530-99409ac0586a"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:37:06 crc kubenswrapper[5099]: I0121 18:37:06.541942 5099 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/69430763-5b99-43d3-9530-99409ac0586a-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 21 18:37:06 crc kubenswrapper[5099]: I0121 18:37:06.681187 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"69430763-5b99-43d3-9530-99409ac0586a","Type":"ContainerDied","Data":"5ef4bcc357d4685298dbafd671d62b52ff1c33d7b124f2c26eaa38a6a1b974e6"} Jan 21 18:37:06 crc kubenswrapper[5099]: I0121 18:37:06.681241 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ef4bcc357d4685298dbafd671d62b52ff1c33d7b124f2c26eaa38a6a1b974e6" Jan 21 18:37:06 crc kubenswrapper[5099]: I0121 18:37:06.681376 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 21 18:37:08 crc kubenswrapper[5099]: I0121 18:37:08.698395 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69430763-5b99-43d3-9530-99409ac0586a-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "69430763-5b99-43d3-9530-99409ac0586a" (UID: "69430763-5b99-43d3-9530-99409ac0586a"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:37:08 crc kubenswrapper[5099]: I0121 18:37:08.776757 5099 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/69430763-5b99-43d3-9530-99409ac0586a-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 21 18:37:09 crc kubenswrapper[5099]: I0121 18:37:09.937781 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lc86h"] Jan 21 18:37:09 crc kubenswrapper[5099]: I0121 18:37:09.942804 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e37cee57-575e-48ce-9363-4dee3f80ce0f" containerName="registry-server" Jan 21 18:37:09 crc kubenswrapper[5099]: I0121 18:37:09.942860 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="e37cee57-575e-48ce-9363-4dee3f80ce0f" containerName="registry-server" Jan 21 18:37:09 crc kubenswrapper[5099]: I0121 18:37:09.942890 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="69430763-5b99-43d3-9530-99409ac0586a" containerName="manage-dockerfile" Jan 21 18:37:09 crc kubenswrapper[5099]: I0121 18:37:09.942898 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="69430763-5b99-43d3-9530-99409ac0586a" containerName="manage-dockerfile" Jan 21 18:37:09 crc kubenswrapper[5099]: I0121 18:37:09.942908 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e37cee57-575e-48ce-9363-4dee3f80ce0f" containerName="extract-content" Jan 21 18:37:09 crc kubenswrapper[5099]: I0121 18:37:09.942915 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="e37cee57-575e-48ce-9363-4dee3f80ce0f" containerName="extract-content" Jan 21 18:37:09 crc kubenswrapper[5099]: I0121 18:37:09.942939 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="69430763-5b99-43d3-9530-99409ac0586a" containerName="docker-build" Jan 21 18:37:09 crc kubenswrapper[5099]: I0121 18:37:09.942945 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="69430763-5b99-43d3-9530-99409ac0586a" containerName="docker-build" Jan 21 18:37:09 crc kubenswrapper[5099]: I0121 18:37:09.942960 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="69430763-5b99-43d3-9530-99409ac0586a" containerName="git-clone" Jan 21 18:37:09 crc kubenswrapper[5099]: I0121 18:37:09.942967 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="69430763-5b99-43d3-9530-99409ac0586a" containerName="git-clone" Jan 21 18:37:09 crc kubenswrapper[5099]: I0121 18:37:09.942974 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e37cee57-575e-48ce-9363-4dee3f80ce0f" containerName="extract-utilities" Jan 21 18:37:09 crc kubenswrapper[5099]: I0121 18:37:09.942981 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="e37cee57-575e-48ce-9363-4dee3f80ce0f" containerName="extract-utilities" Jan 21 18:37:09 crc kubenswrapper[5099]: I0121 18:37:09.943117 5099 memory_manager.go:356] "RemoveStaleState 
removing state" podUID="69430763-5b99-43d3-9530-99409ac0586a" containerName="docker-build" Jan 21 18:37:09 crc kubenswrapper[5099]: I0121 18:37:09.943144 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="e37cee57-575e-48ce-9363-4dee3f80ce0f" containerName="registry-server" Jan 21 18:37:10 crc kubenswrapper[5099]: I0121 18:37:10.239763 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lc86h"] Jan 21 18:37:10 crc kubenswrapper[5099]: I0121 18:37:10.240060 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lc86h" Jan 21 18:37:10 crc kubenswrapper[5099]: I0121 18:37:10.401798 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92n4z\" (UniqueName: \"kubernetes.io/projected/9d0d9286-b656-4cd5-b060-b64351a37252-kube-api-access-92n4z\") pod \"certified-operators-lc86h\" (UID: \"9d0d9286-b656-4cd5-b060-b64351a37252\") " pod="openshift-marketplace/certified-operators-lc86h" Jan 21 18:37:10 crc kubenswrapper[5099]: I0121 18:37:10.401866 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d0d9286-b656-4cd5-b060-b64351a37252-utilities\") pod \"certified-operators-lc86h\" (UID: \"9d0d9286-b656-4cd5-b060-b64351a37252\") " pod="openshift-marketplace/certified-operators-lc86h" Jan 21 18:37:10 crc kubenswrapper[5099]: I0121 18:37:10.401959 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d0d9286-b656-4cd5-b060-b64351a37252-catalog-content\") pod \"certified-operators-lc86h\" (UID: \"9d0d9286-b656-4cd5-b060-b64351a37252\") " pod="openshift-marketplace/certified-operators-lc86h" Jan 21 18:37:10 crc kubenswrapper[5099]: I0121 18:37:10.503090 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-92n4z\" (UniqueName: \"kubernetes.io/projected/9d0d9286-b656-4cd5-b060-b64351a37252-kube-api-access-92n4z\") pod \"certified-operators-lc86h\" (UID: \"9d0d9286-b656-4cd5-b060-b64351a37252\") " pod="openshift-marketplace/certified-operators-lc86h" Jan 21 18:37:10 crc kubenswrapper[5099]: I0121 18:37:10.503563 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d0d9286-b656-4cd5-b060-b64351a37252-utilities\") pod \"certified-operators-lc86h\" (UID: \"9d0d9286-b656-4cd5-b060-b64351a37252\") " pod="openshift-marketplace/certified-operators-lc86h" Jan 21 18:37:10 crc kubenswrapper[5099]: I0121 18:37:10.503705 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d0d9286-b656-4cd5-b060-b64351a37252-catalog-content\") pod \"certified-operators-lc86h\" (UID: \"9d0d9286-b656-4cd5-b060-b64351a37252\") " pod="openshift-marketplace/certified-operators-lc86h" Jan 21 18:37:10 crc kubenswrapper[5099]: I0121 18:37:10.504971 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d0d9286-b656-4cd5-b060-b64351a37252-utilities\") pod \"certified-operators-lc86h\" (UID: \"9d0d9286-b656-4cd5-b060-b64351a37252\") " pod="openshift-marketplace/certified-operators-lc86h" Jan 21 18:37:10 crc kubenswrapper[5099]: I0121 18:37:10.505000 5099 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d0d9286-b656-4cd5-b060-b64351a37252-catalog-content\") pod \"certified-operators-lc86h\" (UID: \"9d0d9286-b656-4cd5-b060-b64351a37252\") " pod="openshift-marketplace/certified-operators-lc86h" Jan 21 18:37:10 crc kubenswrapper[5099]: I0121 18:37:10.526136 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-92n4z\" (UniqueName: \"kubernetes.io/projected/9d0d9286-b656-4cd5-b060-b64351a37252-kube-api-access-92n4z\") pod \"certified-operators-lc86h\" (UID: \"9d0d9286-b656-4cd5-b060-b64351a37252\") " pod="openshift-marketplace/certified-operators-lc86h" Jan 21 18:37:10 crc kubenswrapper[5099]: I0121 18:37:10.568590 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lc86h" Jan 21 18:37:10 crc kubenswrapper[5099]: I0121 18:37:10.787324 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lc86h"] Jan 21 18:37:11 crc kubenswrapper[5099]: I0121 18:37:11.720257 5099 generic.go:358] "Generic (PLEG): container finished" podID="9d0d9286-b656-4cd5-b060-b64351a37252" containerID="124cd450127fa48c84ddae0dc8c259acb2f75bf3901f4c839d66e099f8d94a0b" exitCode=0 Jan 21 18:37:11 crc kubenswrapper[5099]: I0121 18:37:11.720330 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lc86h" event={"ID":"9d0d9286-b656-4cd5-b060-b64351a37252","Type":"ContainerDied","Data":"124cd450127fa48c84ddae0dc8c259acb2f75bf3901f4c839d66e099f8d94a0b"} Jan 21 18:37:11 crc kubenswrapper[5099]: I0121 18:37:11.720417 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lc86h" event={"ID":"9d0d9286-b656-4cd5-b060-b64351a37252","Type":"ContainerStarted","Data":"172155c883131d2d5eccfdf8fdaed9deaf88497b10b9c69acd6d11550140632e"} Jan 21 18:37:11 crc kubenswrapper[5099]: I0121 18:37:11.750815 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 21 18:37:11 crc kubenswrapper[5099]: I0121 18:37:11.768818 5099 util.go:30] "No sandbox for pod can be found. 
Jan 21 18:37:11 crc kubenswrapper[5099]: I0121 18:37:11.771055 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"]
Jan 21 18:37:11 crc kubenswrapper[5099]: I0121 18:37:11.776502 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-1-sys-config\""
Jan 21 18:37:11 crc kubenswrapper[5099]: I0121 18:37:11.776789 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-j8qh6\""
Jan 21 18:37:11 crc kubenswrapper[5099]: I0121 18:37:11.776841 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-1-global-ca\""
Jan 21 18:37:11 crc kubenswrapper[5099]: I0121 18:37:11.776988 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-1-ca\""
Jan 21 18:37:11 crc kubenswrapper[5099]: I0121 18:37:11.929899 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d20b0f46-dfcd-4094-b6bf-d61fc3130637-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 18:37:11 crc kubenswrapper[5099]: I0121 18:37:11.929957 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-j8qh6-pull\" (UniqueName: \"kubernetes.io/secret/d20b0f46-dfcd-4094-b6bf-d61fc3130637-builder-dockercfg-j8qh6-pull\") pod \"sg-bridge-1-build\" (UID: \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 18:37:11 crc kubenswrapper[5099]: I0121 18:37:11.929999 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d20b0f46-dfcd-4094-b6bf-d61fc3130637-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 18:37:11 crc kubenswrapper[5099]: I0121 18:37:11.930021 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d20b0f46-dfcd-4094-b6bf-d61fc3130637-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 18:37:11 crc kubenswrapper[5099]: I0121 18:37:11.930050 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d20b0f46-dfcd-4094-b6bf-d61fc3130637-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 18:37:11 crc kubenswrapper[5099]: I0121 18:37:11.930085 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d20b0f46-dfcd-4094-b6bf-d61fc3130637-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 18:37:11 crc kubenswrapper[5099]: I0121 18:37:11.930147 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-j8qh6-push\" (UniqueName: \"kubernetes.io/secret/d20b0f46-dfcd-4094-b6bf-d61fc3130637-builder-dockercfg-j8qh6-push\") pod \"sg-bridge-1-build\" (UID: \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 18:37:11 crc kubenswrapper[5099]: I0121 18:37:11.930164 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d20b0f46-dfcd-4094-b6bf-d61fc3130637-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 18:37:11 crc kubenswrapper[5099]: I0121 18:37:11.930187 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d20b0f46-dfcd-4094-b6bf-d61fc3130637-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 18:37:11 crc kubenswrapper[5099]: I0121 18:37:11.930210 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d20b0f46-dfcd-4094-b6bf-d61fc3130637-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 18:37:11 crc kubenswrapper[5099]: I0121 18:37:11.930234 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrq58\" (UniqueName: \"kubernetes.io/projected/d20b0f46-dfcd-4094-b6bf-d61fc3130637-kube-api-access-lrq58\") pod \"sg-bridge-1-build\" (UID: \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 18:37:11 crc kubenswrapper[5099]: I0121 18:37:11.930326 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d20b0f46-dfcd-4094-b6bf-d61fc3130637-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 18:37:12 crc kubenswrapper[5099]: I0121 18:37:12.032448 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d20b0f46-dfcd-4094-b6bf-d61fc3130637-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 18:37:12 crc kubenswrapper[5099]: I0121 18:37:12.032533 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d20b0f46-dfcd-4094-b6bf-d61fc3130637-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 18:37:12 crc kubenswrapper[5099]: I0121 18:37:12.032593 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d20b0f46-dfcd-4094-b6bf-d61fc3130637-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 18:37:12 crc kubenswrapper[5099]: I0121 18:37:12.032631 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d20b0f46-dfcd-4094-b6bf-d61fc3130637-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 18:37:12 crc kubenswrapper[5099]: I0121 18:37:12.032778 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-j8qh6-push\" (UniqueName: \"kubernetes.io/secret/d20b0f46-dfcd-4094-b6bf-d61fc3130637-builder-dockercfg-j8qh6-push\") pod \"sg-bridge-1-build\" (UID: \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 18:37:12 crc kubenswrapper[5099]: I0121 18:37:12.032803 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d20b0f46-dfcd-4094-b6bf-d61fc3130637-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 18:37:12 crc kubenswrapper[5099]: I0121 18:37:12.032824 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d20b0f46-dfcd-4094-b6bf-d61fc3130637-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 18:37:12 crc kubenswrapper[5099]: I0121 18:37:12.032854 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d20b0f46-dfcd-4094-b6bf-d61fc3130637-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 18:37:12 crc kubenswrapper[5099]: I0121 18:37:12.032883 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lrq58\" (UniqueName: \"kubernetes.io/projected/d20b0f46-dfcd-4094-b6bf-d61fc3130637-kube-api-access-lrq58\") pod \"sg-bridge-1-build\" (UID: \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 18:37:12 crc kubenswrapper[5099]: I0121 18:37:12.032917 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d20b0f46-dfcd-4094-b6bf-d61fc3130637-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 18:37:12 crc kubenswrapper[5099]: I0121 18:37:12.032945 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d20b0f46-dfcd-4094-b6bf-d61fc3130637-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 18:37:12 crc kubenswrapper[5099]: I0121 18:37:12.032988 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-j8qh6-pull\" (UniqueName: \"kubernetes.io/secret/d20b0f46-dfcd-4094-b6bf-d61fc3130637-builder-dockercfg-j8qh6-pull\") pod \"sg-bridge-1-build\" (UID: \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 18:37:12 crc kubenswrapper[5099]: I0121 18:37:12.033081 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d20b0f46-dfcd-4094-b6bf-d61fc3130637-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 18:37:12 crc kubenswrapper[5099]: I0121 18:37:12.033540 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d20b0f46-dfcd-4094-b6bf-d61fc3130637-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 18:37:12 crc kubenswrapper[5099]: I0121 18:37:12.033586 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d20b0f46-dfcd-4094-b6bf-d61fc3130637-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 18:37:12 crc kubenswrapper[5099]: I0121 18:37:12.033977 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d20b0f46-dfcd-4094-b6bf-d61fc3130637-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 18:37:12 crc kubenswrapper[5099]: I0121 18:37:12.033970 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d20b0f46-dfcd-4094-b6bf-d61fc3130637-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 18:37:12 crc kubenswrapper[5099]: I0121 18:37:12.034142 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d20b0f46-dfcd-4094-b6bf-d61fc3130637-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 18:37:12 crc kubenswrapper[5099]: I0121 18:37:12.034524 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d20b0f46-dfcd-4094-b6bf-d61fc3130637-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 18:37:12 crc kubenswrapper[5099]: I0121 18:37:12.034572 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d20b0f46-dfcd-4094-b6bf-d61fc3130637-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 18:37:12 crc kubenswrapper[5099]: I0121 18:37:12.034941 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d20b0f46-dfcd-4094-b6bf-d61fc3130637-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 18:37:12 crc kubenswrapper[5099]: I0121 18:37:12.042626 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-j8qh6-pull\" (UniqueName: \"kubernetes.io/secret/d20b0f46-dfcd-4094-b6bf-d61fc3130637-builder-dockercfg-j8qh6-pull\") pod \"sg-bridge-1-build\" (UID: \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 18:37:12 crc kubenswrapper[5099]: I0121 18:37:12.044140 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-j8qh6-push\" (UniqueName: \"kubernetes.io/secret/d20b0f46-dfcd-4094-b6bf-d61fc3130637-builder-dockercfg-j8qh6-push\") pod \"sg-bridge-1-build\" (UID: \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 18:37:12 crc kubenswrapper[5099]: I0121 18:37:12.051147 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrq58\" (UniqueName: \"kubernetes.io/projected/d20b0f46-dfcd-4094-b6bf-d61fc3130637-kube-api-access-lrq58\") pod \"sg-bridge-1-build\" (UID: \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 18:37:12 crc kubenswrapper[5099]: I0121 18:37:12.092330 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build"
Jan 21 18:37:12 crc kubenswrapper[5099]: I0121 18:37:12.322271 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"]
Jan 21 18:37:12 crc kubenswrapper[5099]: I0121 18:37:12.732960 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"d20b0f46-dfcd-4094-b6bf-d61fc3130637","Type":"ContainerStarted","Data":"92a01da8a10b4b85aecf812b2e2f999bd1f3c6338c43a9df2d9665df2dbd6e22"}
Jan 21 18:37:12 crc kubenswrapper[5099]: I0121 18:37:12.736365 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"d20b0f46-dfcd-4094-b6bf-d61fc3130637","Type":"ContainerStarted","Data":"e7d133ad243e21c2a636d6a68619a1751abc993789930bb177908d8194b73c98"}
Jan 21 18:37:12 crc kubenswrapper[5099]: I0121 18:37:12.736514 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lc86h" event={"ID":"9d0d9286-b656-4cd5-b060-b64351a37252","Type":"ContainerStarted","Data":"7d0318fe91d9a469c3fb35cfec92e565892482de13a456e2d080752179864665"}
Jan 21 18:37:13 crc kubenswrapper[5099]: I0121 18:37:13.745825 5099 generic.go:358] "Generic (PLEG): container finished" podID="d20b0f46-dfcd-4094-b6bf-d61fc3130637" containerID="92a01da8a10b4b85aecf812b2e2f999bd1f3c6338c43a9df2d9665df2dbd6e22" exitCode=0
Jan 21 18:37:13 crc kubenswrapper[5099]: I0121 18:37:13.745967 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"d20b0f46-dfcd-4094-b6bf-d61fc3130637","Type":"ContainerDied","Data":"92a01da8a10b4b85aecf812b2e2f999bd1f3c6338c43a9df2d9665df2dbd6e22"}
Jan 21 18:37:13 crc kubenswrapper[5099]: I0121 18:37:13.749063 5099 generic.go:358] "Generic (PLEG): container finished" podID="9d0d9286-b656-4cd5-b060-b64351a37252" containerID="7d0318fe91d9a469c3fb35cfec92e565892482de13a456e2d080752179864665" exitCode=0
Jan 21 18:37:13 crc kubenswrapper[5099]: I0121 18:37:13.749162 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lc86h" event={"ID":"9d0d9286-b656-4cd5-b060-b64351a37252","Type":"ContainerDied","Data":"7d0318fe91d9a469c3fb35cfec92e565892482de13a456e2d080752179864665"}
Jan 21 18:37:14 crc kubenswrapper[5099]: I0121 18:37:14.759013 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"d20b0f46-dfcd-4094-b6bf-d61fc3130637","Type":"ContainerStarted","Data":"2a44921b5c679b88afa0810272952278e5b49433cbf635edf753aa1c08a1ec08"}
event={"ID":"d20b0f46-dfcd-4094-b6bf-d61fc3130637","Type":"ContainerStarted","Data":"2a44921b5c679b88afa0810272952278e5b49433cbf635edf753aa1c08a1ec08"} Jan 21 18:37:14 crc kubenswrapper[5099]: I0121 18:37:14.761796 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lc86h" event={"ID":"9d0d9286-b656-4cd5-b060-b64351a37252","Type":"ContainerStarted","Data":"3a7d81506421b9fe452b6e05d9fa650342d5ab789f23ce06a1af571d1f1bfff3"} Jan 21 18:37:14 crc kubenswrapper[5099]: I0121 18:37:14.813340 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-bridge-1-build" podStartSLOduration=3.813317361 podStartE2EDuration="3.813317361s" podCreationTimestamp="2026-01-21 18:37:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:37:14.801538805 +0000 UTC m=+1392.215501266" watchObservedRunningTime="2026-01-21 18:37:14.813317361 +0000 UTC m=+1392.227279822" Jan 21 18:37:14 crc kubenswrapper[5099]: I0121 18:37:14.831631 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lc86h" podStartSLOduration=5.201510864 podStartE2EDuration="5.831608276s" podCreationTimestamp="2026-01-21 18:37:09 +0000 UTC" firstStartedPulling="2026-01-21 18:37:11.721441185 +0000 UTC m=+1389.135403646" lastFinishedPulling="2026-01-21 18:37:12.351538597 +0000 UTC m=+1389.765501058" observedRunningTime="2026-01-21 18:37:14.825960509 +0000 UTC m=+1392.239922970" watchObservedRunningTime="2026-01-21 18:37:14.831608276 +0000 UTC m=+1392.245570737" Jan 21 18:37:20 crc kubenswrapper[5099]: I0121 18:37:20.569579 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-lc86h" Jan 21 18:37:20 crc kubenswrapper[5099]: I0121 18:37:20.570450 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lc86h" Jan 21 18:37:20 crc kubenswrapper[5099]: I0121 18:37:20.610477 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lc86h" Jan 21 18:37:20 crc kubenswrapper[5099]: I0121 18:37:20.853718 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lc86h" Jan 21 18:37:20 crc kubenswrapper[5099]: I0121 18:37:20.911111 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lc86h"] Jan 21 18:37:22 crc kubenswrapper[5099]: I0121 18:37:22.065163 5099 patch_prober.go:28] interesting pod/machine-config-daemon-hsl47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 18:37:22 crc kubenswrapper[5099]: I0121 18:37:22.065261 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 18:37:22 crc kubenswrapper[5099]: I0121 18:37:22.303165 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 21 18:37:22 crc 
Jan 21 18:37:22 crc kubenswrapper[5099]: I0121 18:37:22.304233 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/sg-bridge-1-build" podUID="d20b0f46-dfcd-4094-b6bf-d61fc3130637" containerName="docker-build" containerID="cri-o://2a44921b5c679b88afa0810272952278e5b49433cbf635edf753aa1c08a1ec08" gracePeriod=30
Jan 21 18:37:22 crc kubenswrapper[5099]: I0121 18:37:22.759284 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_d20b0f46-dfcd-4094-b6bf-d61fc3130637/docker-build/0.log"
Jan 21 18:37:22 crc kubenswrapper[5099]: I0121 18:37:22.760787 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build"
Jan 21 18:37:22 crc kubenswrapper[5099]: I0121 18:37:22.813526 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d20b0f46-dfcd-4094-b6bf-d61fc3130637-build-system-configs\") pod \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\" (UID: \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\") "
Jan 21 18:37:22 crc kubenswrapper[5099]: I0121 18:37:22.813618 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrq58\" (UniqueName: \"kubernetes.io/projected/d20b0f46-dfcd-4094-b6bf-d61fc3130637-kube-api-access-lrq58\") pod \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\" (UID: \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\") "
Jan 21 18:37:22 crc kubenswrapper[5099]: I0121 18:37:22.813792 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-j8qh6-pull\" (UniqueName: \"kubernetes.io/secret/d20b0f46-dfcd-4094-b6bf-d61fc3130637-builder-dockercfg-j8qh6-pull\") pod \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\" (UID: \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\") "
Jan 21 18:37:22 crc kubenswrapper[5099]: I0121 18:37:22.813875 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d20b0f46-dfcd-4094-b6bf-d61fc3130637-node-pullsecrets\") pod \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\" (UID: \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\") "
Jan 21 18:37:22 crc kubenswrapper[5099]: I0121 18:37:22.813951 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d20b0f46-dfcd-4094-b6bf-d61fc3130637-buildcachedir\") pod \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\" (UID: \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\") "
Jan 21 18:37:22 crc kubenswrapper[5099]: I0121 18:37:22.813976 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d20b0f46-dfcd-4094-b6bf-d61fc3130637-container-storage-root\") pod \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\" (UID: \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\") "
Jan 21 18:37:22 crc kubenswrapper[5099]: I0121 18:37:22.813998 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d20b0f46-dfcd-4094-b6bf-d61fc3130637-buildworkdir\") pod \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\" (UID: \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\") "
Jan 21 18:37:22 crc kubenswrapper[5099]: I0121 18:37:22.814087 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d20b0f46-dfcd-4094-b6bf-d61fc3130637-build-proxy-ca-bundles\") pod \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\" (UID: \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\") "
Jan 21 18:37:22 crc kubenswrapper[5099]: I0121 18:37:22.814086 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d20b0f46-dfcd-4094-b6bf-d61fc3130637-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "d20b0f46-dfcd-4094-b6bf-d61fc3130637" (UID: "d20b0f46-dfcd-4094-b6bf-d61fc3130637"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 18:37:22 crc kubenswrapper[5099]: I0121 18:37:22.814123 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d20b0f46-dfcd-4094-b6bf-d61fc3130637-container-storage-run\") pod \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\" (UID: \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\") "
Jan 21 18:37:22 crc kubenswrapper[5099]: I0121 18:37:22.814163 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d20b0f46-dfcd-4094-b6bf-d61fc3130637-build-ca-bundles\") pod \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\" (UID: \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\") "
Jan 21 18:37:22 crc kubenswrapper[5099]: I0121 18:37:22.814173 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d20b0f46-dfcd-4094-b6bf-d61fc3130637-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "d20b0f46-dfcd-4094-b6bf-d61fc3130637" (UID: "d20b0f46-dfcd-4094-b6bf-d61fc3130637"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 18:37:22 crc kubenswrapper[5099]: I0121 18:37:22.814224 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d20b0f46-dfcd-4094-b6bf-d61fc3130637-build-blob-cache\") pod \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\" (UID: \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\") "
Jan 21 18:37:22 crc kubenswrapper[5099]: I0121 18:37:22.814408 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-j8qh6-push\" (UniqueName: \"kubernetes.io/secret/d20b0f46-dfcd-4094-b6bf-d61fc3130637-builder-dockercfg-j8qh6-push\") pod \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\" (UID: \"d20b0f46-dfcd-4094-b6bf-d61fc3130637\") "
Jan 21 18:37:22 crc kubenswrapper[5099]: I0121 18:37:22.815389 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d20b0f46-dfcd-4094-b6bf-d61fc3130637-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "d20b0f46-dfcd-4094-b6bf-d61fc3130637" (UID: "d20b0f46-dfcd-4094-b6bf-d61fc3130637"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 18:37:22 crc kubenswrapper[5099]: I0121 18:37:22.815595 5099 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d20b0f46-dfcd-4094-b6bf-d61fc3130637-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Jan 21 18:37:22 crc kubenswrapper[5099]: I0121 18:37:22.815622 5099 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d20b0f46-dfcd-4094-b6bf-d61fc3130637-buildcachedir\") on node \"crc\" DevicePath \"\""
Jan 21 18:37:22 crc kubenswrapper[5099]: I0121 18:37:22.815801 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d20b0f46-dfcd-4094-b6bf-d61fc3130637-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "d20b0f46-dfcd-4094-b6bf-d61fc3130637" (UID: "d20b0f46-dfcd-4094-b6bf-d61fc3130637"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 18:37:22 crc kubenswrapper[5099]: I0121 18:37:22.816067 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d20b0f46-dfcd-4094-b6bf-d61fc3130637-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "d20b0f46-dfcd-4094-b6bf-d61fc3130637" (UID: "d20b0f46-dfcd-4094-b6bf-d61fc3130637"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 18:37:22 crc kubenswrapper[5099]: I0121 18:37:22.816151 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d20b0f46-dfcd-4094-b6bf-d61fc3130637-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "d20b0f46-dfcd-4094-b6bf-d61fc3130637" (UID: "d20b0f46-dfcd-4094-b6bf-d61fc3130637"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 18:37:22 crc kubenswrapper[5099]: I0121 18:37:22.816690 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d20b0f46-dfcd-4094-b6bf-d61fc3130637-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "d20b0f46-dfcd-4094-b6bf-d61fc3130637" (UID: "d20b0f46-dfcd-4094-b6bf-d61fc3130637"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 18:37:22 crc kubenswrapper[5099]: I0121 18:37:22.822316 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d20b0f46-dfcd-4094-b6bf-d61fc3130637-builder-dockercfg-j8qh6-pull" (OuterVolumeSpecName: "builder-dockercfg-j8qh6-pull") pod "d20b0f46-dfcd-4094-b6bf-d61fc3130637" (UID: "d20b0f46-dfcd-4094-b6bf-d61fc3130637"). InnerVolumeSpecName "builder-dockercfg-j8qh6-pull". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 18:37:22 crc kubenswrapper[5099]: I0121 18:37:22.822973 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d20b0f46-dfcd-4094-b6bf-d61fc3130637-builder-dockercfg-j8qh6-push" (OuterVolumeSpecName: "builder-dockercfg-j8qh6-push") pod "d20b0f46-dfcd-4094-b6bf-d61fc3130637" (UID: "d20b0f46-dfcd-4094-b6bf-d61fc3130637"). InnerVolumeSpecName "builder-dockercfg-j8qh6-push". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 18:37:22 crc kubenswrapper[5099]: I0121 18:37:22.822990 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d20b0f46-dfcd-4094-b6bf-d61fc3130637-kube-api-access-lrq58" (OuterVolumeSpecName: "kube-api-access-lrq58") pod "d20b0f46-dfcd-4094-b6bf-d61fc3130637" (UID: "d20b0f46-dfcd-4094-b6bf-d61fc3130637"). InnerVolumeSpecName "kube-api-access-lrq58". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 18:37:22 crc kubenswrapper[5099]: I0121 18:37:22.826203 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_d20b0f46-dfcd-4094-b6bf-d61fc3130637/docker-build/0.log"
Jan 21 18:37:22 crc kubenswrapper[5099]: I0121 18:37:22.826934 5099 generic.go:358] "Generic (PLEG): container finished" podID="d20b0f46-dfcd-4094-b6bf-d61fc3130637" containerID="2a44921b5c679b88afa0810272952278e5b49433cbf635edf753aa1c08a1ec08" exitCode=1
Jan 21 18:37:22 crc kubenswrapper[5099]: I0121 18:37:22.826987 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"d20b0f46-dfcd-4094-b6bf-d61fc3130637","Type":"ContainerDied","Data":"2a44921b5c679b88afa0810272952278e5b49433cbf635edf753aa1c08a1ec08"}
Jan 21 18:37:22 crc kubenswrapper[5099]: I0121 18:37:22.827053 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"d20b0f46-dfcd-4094-b6bf-d61fc3130637","Type":"ContainerDied","Data":"e7d133ad243e21c2a636d6a68619a1751abc993789930bb177908d8194b73c98"}
Jan 21 18:37:22 crc kubenswrapper[5099]: I0121 18:37:22.827080 5099 scope.go:117] "RemoveContainer" containerID="2a44921b5c679b88afa0810272952278e5b49433cbf635edf753aa1c08a1ec08"
Jan 21 18:37:22 crc kubenswrapper[5099]: I0121 18:37:22.827286 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lc86h" podUID="9d0d9286-b656-4cd5-b060-b64351a37252" containerName="registry-server" containerID="cri-o://3a7d81506421b9fe452b6e05d9fa650342d5ab789f23ce06a1af571d1f1bfff3" gracePeriod=2
Jan 21 18:37:22 crc kubenswrapper[5099]: I0121 18:37:22.827425 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build"
Jan 21 18:37:22 crc kubenswrapper[5099]: I0121 18:37:22.899265 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d20b0f46-dfcd-4094-b6bf-d61fc3130637-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "d20b0f46-dfcd-4094-b6bf-d61fc3130637" (UID: "d20b0f46-dfcd-4094-b6bf-d61fc3130637"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 18:37:22 crc kubenswrapper[5099]: I0121 18:37:22.916793 5099 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-j8qh6-pull\" (UniqueName: \"kubernetes.io/secret/d20b0f46-dfcd-4094-b6bf-d61fc3130637-builder-dockercfg-j8qh6-pull\") on node \"crc\" DevicePath \"\""
Jan 21 18:37:22 crc kubenswrapper[5099]: I0121 18:37:22.916845 5099 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d20b0f46-dfcd-4094-b6bf-d61fc3130637-buildworkdir\") on node \"crc\" DevicePath \"\""
Jan 21 18:37:22 crc kubenswrapper[5099]: I0121 18:37:22.916858 5099 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d20b0f46-dfcd-4094-b6bf-d61fc3130637-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 21 18:37:22 crc kubenswrapper[5099]: I0121 18:37:22.916871 5099 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d20b0f46-dfcd-4094-b6bf-d61fc3130637-container-storage-run\") on node \"crc\" DevicePath \"\""
Jan 21 18:37:22 crc kubenswrapper[5099]: I0121 18:37:22.916884 5099 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d20b0f46-dfcd-4094-b6bf-d61fc3130637-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 21 18:37:22 crc kubenswrapper[5099]: I0121 18:37:22.916897 5099 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d20b0f46-dfcd-4094-b6bf-d61fc3130637-build-blob-cache\") on node \"crc\" DevicePath \"\""
Jan 21 18:37:22 crc kubenswrapper[5099]: I0121 18:37:22.916909 5099 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-j8qh6-push\" (UniqueName: \"kubernetes.io/secret/d20b0f46-dfcd-4094-b6bf-d61fc3130637-builder-dockercfg-j8qh6-push\") on node \"crc\" DevicePath \"\""
Jan 21 18:37:22 crc kubenswrapper[5099]: I0121 18:37:22.916921 5099 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d20b0f46-dfcd-4094-b6bf-d61fc3130637-build-system-configs\") on node \"crc\" DevicePath \"\""
Jan 21 18:37:22 crc kubenswrapper[5099]: I0121 18:37:22.916997 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lrq58\" (UniqueName: \"kubernetes.io/projected/d20b0f46-dfcd-4094-b6bf-d61fc3130637-kube-api-access-lrq58\") on node \"crc\" DevicePath \"\""
Jan 21 18:37:22 crc kubenswrapper[5099]: I0121 18:37:22.945942 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d20b0f46-dfcd-4094-b6bf-d61fc3130637-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "d20b0f46-dfcd-4094-b6bf-d61fc3130637" (UID: "d20b0f46-dfcd-4094-b6bf-d61fc3130637"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:37:22 crc kubenswrapper[5099]: I0121 18:37:22.960852 5099 scope.go:117] "RemoveContainer" containerID="92a01da8a10b4b85aecf812b2e2f999bd1f3c6338c43a9df2d9665df2dbd6e22" Jan 21 18:37:23 crc kubenswrapper[5099]: I0121 18:37:23.022820 5099 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d20b0f46-dfcd-4094-b6bf-d61fc3130637-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 21 18:37:23 crc kubenswrapper[5099]: I0121 18:37:23.044687 5099 scope.go:117] "RemoveContainer" containerID="2a44921b5c679b88afa0810272952278e5b49433cbf635edf753aa1c08a1ec08" Jan 21 18:37:23 crc kubenswrapper[5099]: E0121 18:37:23.045329 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a44921b5c679b88afa0810272952278e5b49433cbf635edf753aa1c08a1ec08\": container with ID starting with 2a44921b5c679b88afa0810272952278e5b49433cbf635edf753aa1c08a1ec08 not found: ID does not exist" containerID="2a44921b5c679b88afa0810272952278e5b49433cbf635edf753aa1c08a1ec08" Jan 21 18:37:23 crc kubenswrapper[5099]: I0121 18:37:23.045409 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a44921b5c679b88afa0810272952278e5b49433cbf635edf753aa1c08a1ec08"} err="failed to get container status \"2a44921b5c679b88afa0810272952278e5b49433cbf635edf753aa1c08a1ec08\": rpc error: code = NotFound desc = could not find container \"2a44921b5c679b88afa0810272952278e5b49433cbf635edf753aa1c08a1ec08\": container with ID starting with 2a44921b5c679b88afa0810272952278e5b49433cbf635edf753aa1c08a1ec08 not found: ID does not exist" Jan 21 18:37:23 crc kubenswrapper[5099]: I0121 18:37:23.045455 5099 scope.go:117] "RemoveContainer" containerID="92a01da8a10b4b85aecf812b2e2f999bd1f3c6338c43a9df2d9665df2dbd6e22" Jan 21 18:37:23 crc kubenswrapper[5099]: E0121 18:37:23.045951 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92a01da8a10b4b85aecf812b2e2f999bd1f3c6338c43a9df2d9665df2dbd6e22\": container with ID starting with 92a01da8a10b4b85aecf812b2e2f999bd1f3c6338c43a9df2d9665df2dbd6e22 not found: ID does not exist" containerID="92a01da8a10b4b85aecf812b2e2f999bd1f3c6338c43a9df2d9665df2dbd6e22" Jan 21 18:37:23 crc kubenswrapper[5099]: I0121 18:37:23.045999 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92a01da8a10b4b85aecf812b2e2f999bd1f3c6338c43a9df2d9665df2dbd6e22"} err="failed to get container status \"92a01da8a10b4b85aecf812b2e2f999bd1f3c6338c43a9df2d9665df2dbd6e22\": rpc error: code = NotFound desc = could not find container \"92a01da8a10b4b85aecf812b2e2f999bd1f3c6338c43a9df2d9665df2dbd6e22\": container with ID starting with 92a01da8a10b4b85aecf812b2e2f999bd1f3c6338c43a9df2d9665df2dbd6e22 not found: ID does not exist" Jan 21 18:37:23 crc kubenswrapper[5099]: I0121 18:37:23.180654 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 21 18:37:23 crc kubenswrapper[5099]: I0121 18:37:23.196106 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 21 18:37:23 crc kubenswrapper[5099]: I0121 18:37:23.210985 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lc86h" Jan 21 18:37:23 crc kubenswrapper[5099]: I0121 18:37:23.326952 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d0d9286-b656-4cd5-b060-b64351a37252-catalog-content\") pod \"9d0d9286-b656-4cd5-b060-b64351a37252\" (UID: \"9d0d9286-b656-4cd5-b060-b64351a37252\") " Jan 21 18:37:23 crc kubenswrapper[5099]: I0121 18:37:23.327161 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92n4z\" (UniqueName: \"kubernetes.io/projected/9d0d9286-b656-4cd5-b060-b64351a37252-kube-api-access-92n4z\") pod \"9d0d9286-b656-4cd5-b060-b64351a37252\" (UID: \"9d0d9286-b656-4cd5-b060-b64351a37252\") " Jan 21 18:37:23 crc kubenswrapper[5099]: I0121 18:37:23.327311 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d0d9286-b656-4cd5-b060-b64351a37252-utilities\") pod \"9d0d9286-b656-4cd5-b060-b64351a37252\" (UID: \"9d0d9286-b656-4cd5-b060-b64351a37252\") " Jan 21 18:37:23 crc kubenswrapper[5099]: I0121 18:37:23.328570 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d0d9286-b656-4cd5-b060-b64351a37252-utilities" (OuterVolumeSpecName: "utilities") pod "9d0d9286-b656-4cd5-b060-b64351a37252" (UID: "9d0d9286-b656-4cd5-b060-b64351a37252"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:37:23 crc kubenswrapper[5099]: I0121 18:37:23.328961 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d0d9286-b656-4cd5-b060-b64351a37252-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 18:37:23 crc kubenswrapper[5099]: I0121 18:37:23.333418 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d0d9286-b656-4cd5-b060-b64351a37252-kube-api-access-92n4z" (OuterVolumeSpecName: "kube-api-access-92n4z") pod "9d0d9286-b656-4cd5-b060-b64351a37252" (UID: "9d0d9286-b656-4cd5-b060-b64351a37252"). InnerVolumeSpecName "kube-api-access-92n4z". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:37:23 crc kubenswrapper[5099]: I0121 18:37:23.355492 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d0d9286-b656-4cd5-b060-b64351a37252-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9d0d9286-b656-4cd5-b060-b64351a37252" (UID: "9d0d9286-b656-4cd5-b060-b64351a37252"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:37:23 crc kubenswrapper[5099]: I0121 18:37:23.430338 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-92n4z\" (UniqueName: \"kubernetes.io/projected/9d0d9286-b656-4cd5-b060-b64351a37252-kube-api-access-92n4z\") on node \"crc\" DevicePath \"\"" Jan 21 18:37:23 crc kubenswrapper[5099]: I0121 18:37:23.430392 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d0d9286-b656-4cd5-b060-b64351a37252-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 18:37:23 crc kubenswrapper[5099]: I0121 18:37:23.846170 5099 generic.go:358] "Generic (PLEG): container finished" podID="9d0d9286-b656-4cd5-b060-b64351a37252" containerID="3a7d81506421b9fe452b6e05d9fa650342d5ab789f23ce06a1af571d1f1bfff3" exitCode=0 Jan 21 18:37:23 crc kubenswrapper[5099]: I0121 18:37:23.846311 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lc86h" event={"ID":"9d0d9286-b656-4cd5-b060-b64351a37252","Type":"ContainerDied","Data":"3a7d81506421b9fe452b6e05d9fa650342d5ab789f23ce06a1af571d1f1bfff3"} Jan 21 18:37:23 crc kubenswrapper[5099]: I0121 18:37:23.846373 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lc86h" event={"ID":"9d0d9286-b656-4cd5-b060-b64351a37252","Type":"ContainerDied","Data":"172155c883131d2d5eccfdf8fdaed9deaf88497b10b9c69acd6d11550140632e"} Jan 21 18:37:23 crc kubenswrapper[5099]: I0121 18:37:23.846401 5099 scope.go:117] "RemoveContainer" containerID="3a7d81506421b9fe452b6e05d9fa650342d5ab789f23ce06a1af571d1f1bfff3" Jan 21 18:37:23 crc kubenswrapper[5099]: I0121 18:37:23.846468 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lc86h" Jan 21 18:37:23 crc kubenswrapper[5099]: I0121 18:37:23.876766 5099 scope.go:117] "RemoveContainer" containerID="7d0318fe91d9a469c3fb35cfec92e565892482de13a456e2d080752179864665" Jan 21 18:37:23 crc kubenswrapper[5099]: I0121 18:37:23.884545 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lc86h"] Jan 21 18:37:23 crc kubenswrapper[5099]: I0121 18:37:23.891672 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lc86h"] Jan 21 18:37:23 crc kubenswrapper[5099]: I0121 18:37:23.909172 5099 scope.go:117] "RemoveContainer" containerID="124cd450127fa48c84ddae0dc8c259acb2f75bf3901f4c839d66e099f8d94a0b" Jan 21 18:37:23 crc kubenswrapper[5099]: I0121 18:37:23.928164 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d0d9286-b656-4cd5-b060-b64351a37252" path="/var/lib/kubelet/pods/9d0d9286-b656-4cd5-b060-b64351a37252/volumes" Jan 21 18:37:23 crc kubenswrapper[5099]: I0121 18:37:23.929942 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d20b0f46-dfcd-4094-b6bf-d61fc3130637" path="/var/lib/kubelet/pods/d20b0f46-dfcd-4094-b6bf-d61fc3130637/volumes" Jan 21 18:37:23 crc kubenswrapper[5099]: I0121 18:37:23.936103 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-2-build"] Jan 21 18:37:23 crc kubenswrapper[5099]: I0121 18:37:23.937333 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9d0d9286-b656-4cd5-b060-b64351a37252" containerName="extract-utilities" Jan 21 18:37:23 crc kubenswrapper[5099]: I0121 18:37:23.937360 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d0d9286-b656-4cd5-b060-b64351a37252" containerName="extract-utilities" Jan 21 18:37:23 crc kubenswrapper[5099]: I0121 18:37:23.937375 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9d0d9286-b656-4cd5-b060-b64351a37252" containerName="extract-content" Jan 21 18:37:23 crc kubenswrapper[5099]: I0121 18:37:23.937388 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d0d9286-b656-4cd5-b060-b64351a37252" containerName="extract-content" Jan 21 18:37:23 crc kubenswrapper[5099]: I0121 18:37:23.937401 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d20b0f46-dfcd-4094-b6bf-d61fc3130637" containerName="manage-dockerfile" Jan 21 18:37:23 crc kubenswrapper[5099]: I0121 18:37:23.937407 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="d20b0f46-dfcd-4094-b6bf-d61fc3130637" containerName="manage-dockerfile" Jan 21 18:37:23 crc kubenswrapper[5099]: I0121 18:37:23.937416 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d20b0f46-dfcd-4094-b6bf-d61fc3130637" containerName="docker-build" Jan 21 18:37:23 crc kubenswrapper[5099]: I0121 18:37:23.937422 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="d20b0f46-dfcd-4094-b6bf-d61fc3130637" containerName="docker-build" Jan 21 18:37:23 crc kubenswrapper[5099]: I0121 18:37:23.937442 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9d0d9286-b656-4cd5-b060-b64351a37252" containerName="registry-server" Jan 21 18:37:23 crc kubenswrapper[5099]: I0121 18:37:23.937449 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d0d9286-b656-4cd5-b060-b64351a37252" containerName="registry-server" Jan 21 18:37:23 crc kubenswrapper[5099]: I0121 
18:37:23.937566 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="d20b0f46-dfcd-4094-b6bf-d61fc3130637" containerName="docker-build" Jan 21 18:37:23 crc kubenswrapper[5099]: I0121 18:37:23.937582 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="9d0d9286-b656-4cd5-b060-b64351a37252" containerName="registry-server" Jan 21 18:37:23 crc kubenswrapper[5099]: I0121 18:37:23.962125 5099 scope.go:117] "RemoveContainer" containerID="3a7d81506421b9fe452b6e05d9fa650342d5ab789f23ce06a1af571d1f1bfff3" Jan 21 18:37:23 crc kubenswrapper[5099]: E0121 18:37:23.970517 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a7d81506421b9fe452b6e05d9fa650342d5ab789f23ce06a1af571d1f1bfff3\": container with ID starting with 3a7d81506421b9fe452b6e05d9fa650342d5ab789f23ce06a1af571d1f1bfff3 not found: ID does not exist" containerID="3a7d81506421b9fe452b6e05d9fa650342d5ab789f23ce06a1af571d1f1bfff3" Jan 21 18:37:23 crc kubenswrapper[5099]: I0121 18:37:23.970591 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a7d81506421b9fe452b6e05d9fa650342d5ab789f23ce06a1af571d1f1bfff3"} err="failed to get container status \"3a7d81506421b9fe452b6e05d9fa650342d5ab789f23ce06a1af571d1f1bfff3\": rpc error: code = NotFound desc = could not find container \"3a7d81506421b9fe452b6e05d9fa650342d5ab789f23ce06a1af571d1f1bfff3\": container with ID starting with 3a7d81506421b9fe452b6e05d9fa650342d5ab789f23ce06a1af571d1f1bfff3 not found: ID does not exist" Jan 21 18:37:23 crc kubenswrapper[5099]: I0121 18:37:23.970628 5099 scope.go:117] "RemoveContainer" containerID="7d0318fe91d9a469c3fb35cfec92e565892482de13a456e2d080752179864665" Jan 21 18:37:23 crc kubenswrapper[5099]: E0121 18:37:23.971422 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d0318fe91d9a469c3fb35cfec92e565892482de13a456e2d080752179864665\": container with ID starting with 7d0318fe91d9a469c3fb35cfec92e565892482de13a456e2d080752179864665 not found: ID does not exist" containerID="7d0318fe91d9a469c3fb35cfec92e565892482de13a456e2d080752179864665" Jan 21 18:37:23 crc kubenswrapper[5099]: I0121 18:37:23.971457 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d0318fe91d9a469c3fb35cfec92e565892482de13a456e2d080752179864665"} err="failed to get container status \"7d0318fe91d9a469c3fb35cfec92e565892482de13a456e2d080752179864665\": rpc error: code = NotFound desc = could not find container \"7d0318fe91d9a469c3fb35cfec92e565892482de13a456e2d080752179864665\": container with ID starting with 7d0318fe91d9a469c3fb35cfec92e565892482de13a456e2d080752179864665 not found: ID does not exist" Jan 21 18:37:23 crc kubenswrapper[5099]: I0121 18:37:23.971473 5099 scope.go:117] "RemoveContainer" containerID="124cd450127fa48c84ddae0dc8c259acb2f75bf3901f4c839d66e099f8d94a0b" Jan 21 18:37:23 crc kubenswrapper[5099]: E0121 18:37:23.972077 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"124cd450127fa48c84ddae0dc8c259acb2f75bf3901f4c839d66e099f8d94a0b\": container with ID starting with 124cd450127fa48c84ddae0dc8c259acb2f75bf3901f4c839d66e099f8d94a0b not found: ID does not exist" containerID="124cd450127fa48c84ddae0dc8c259acb2f75bf3901f4c839d66e099f8d94a0b" Jan 21 18:37:23 crc kubenswrapper[5099]: I0121 18:37:23.972161 5099 
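Editor's note: before admitting sg-bridge-2-build, the CPU and memory managers purge per-container pinning state left by the two pods just removed (the RemoveStaleState and "Deleted CPUSet assignment" lines above), keyed by pod UID plus container name. A minimal sketch of that bookkeeping shape; the types are toys, not kubelet's containerMap, and the second container name is a hypothetical placeholder:

```go
package main

import "fmt"

type key struct{ podUID, container string }

// staleStateCleanup drops any assignment whose pod is no longer
// active, mirroring the RemoveStaleState log lines above.
func staleStateCleanup(assignments map[key]string, active map[string]bool) {
	for k := range assignments { // deleting during range is safe in Go
		if !active[k.podUID] {
			fmt.Printf("removing container %q of pod %s\n", k.container, k.podUID)
			delete(assignments, k)
		}
	}
}

func main() {
	a := map[key]string{
		// removed pod (certified-operators-lc86h)
		{"9d0d9286-b656-4cd5-b060-b64351a37252", "registry-server"}: "cpus 0-3",
		// still-active pod (sg-bridge-2-build); container name illustrative
		{"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd", "docker-build"}: "cpus 0-3",
	}
	staleStateCleanup(a, map[string]bool{"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd": true})
	fmt.Println(len(a)) // 1
}
```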
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"124cd450127fa48c84ddae0dc8c259acb2f75bf3901f4c839d66e099f8d94a0b"} err="failed to get container status \"124cd450127fa48c84ddae0dc8c259acb2f75bf3901f4c839d66e099f8d94a0b\": rpc error: code = NotFound desc = could not find container \"124cd450127fa48c84ddae0dc8c259acb2f75bf3901f4c839d66e099f8d94a0b\": container with ID starting with 124cd450127fa48c84ddae0dc8c259acb2f75bf3901f4c839d66e099f8d94a0b not found: ID does not exist" Jan 21 18:37:23 crc kubenswrapper[5099]: I0121 18:37:23.974036 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Jan 21 18:37:23 crc kubenswrapper[5099]: I0121 18:37:23.978270 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-2-ca\"" Jan 21 18:37:23 crc kubenswrapper[5099]: I0121 18:37:23.979530 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-2-global-ca\"" Jan 21 18:37:23 crc kubenswrapper[5099]: I0121 18:37:23.979932 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-2-sys-config\"" Jan 21 18:37:23 crc kubenswrapper[5099]: I0121 18:37:23.986536 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-j8qh6\"" Jan 21 18:37:23 crc kubenswrapper[5099]: I0121 18:37:23.990302 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"] Jan 21 18:37:24 crc kubenswrapper[5099]: I0121 18:37:24.039176 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 18:37:24 crc kubenswrapper[5099]: I0121 18:37:24.039243 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cprl\" (UniqueName: \"kubernetes.io/projected/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-kube-api-access-5cprl\") pod \"sg-bridge-2-build\" (UID: \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 18:37:24 crc kubenswrapper[5099]: I0121 18:37:24.039277 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 18:37:24 crc kubenswrapper[5099]: I0121 18:37:24.039320 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 18:37:24 crc kubenswrapper[5099]: I0121 18:37:24.039704 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-j8qh6-pull\" (UniqueName: \"kubernetes.io/secret/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-builder-dockercfg-j8qh6-pull\") pod 
\"sg-bridge-2-build\" (UID: \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 18:37:24 crc kubenswrapper[5099]: I0121 18:37:24.039855 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 18:37:24 crc kubenswrapper[5099]: I0121 18:37:24.039899 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 18:37:24 crc kubenswrapper[5099]: I0121 18:37:24.039999 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 18:37:24 crc kubenswrapper[5099]: I0121 18:37:24.040043 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-j8qh6-push\" (UniqueName: \"kubernetes.io/secret/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-builder-dockercfg-j8qh6-push\") pod \"sg-bridge-2-build\" (UID: \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 18:37:24 crc kubenswrapper[5099]: I0121 18:37:24.040173 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 18:37:24 crc kubenswrapper[5099]: I0121 18:37:24.040330 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 18:37:24 crc kubenswrapper[5099]: I0121 18:37:24.040409 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 18:37:24 crc kubenswrapper[5099]: I0121 18:37:24.141957 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 18:37:24 crc kubenswrapper[5099]: I0121 18:37:24.142022 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5cprl\" (UniqueName: 
\"kubernetes.io/projected/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-kube-api-access-5cprl\") pod \"sg-bridge-2-build\" (UID: \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 18:37:24 crc kubenswrapper[5099]: I0121 18:37:24.142062 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 18:37:24 crc kubenswrapper[5099]: I0121 18:37:24.142090 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 18:37:24 crc kubenswrapper[5099]: I0121 18:37:24.142127 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-j8qh6-pull\" (UniqueName: \"kubernetes.io/secret/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-builder-dockercfg-j8qh6-pull\") pod \"sg-bridge-2-build\" (UID: \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 18:37:24 crc kubenswrapper[5099]: I0121 18:37:24.142149 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 18:37:24 crc kubenswrapper[5099]: I0121 18:37:24.142576 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 18:37:24 crc kubenswrapper[5099]: I0121 18:37:24.143131 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 18:37:24 crc kubenswrapper[5099]: I0121 18:37:24.143164 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 18:37:24 crc kubenswrapper[5099]: I0121 18:37:24.143325 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 18:37:24 crc kubenswrapper[5099]: I0121 18:37:24.143380 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-j8qh6-push\" (UniqueName: \"kubernetes.io/secret/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-builder-dockercfg-j8qh6-push\") pod 
\"sg-bridge-2-build\" (UID: \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 18:37:24 crc kubenswrapper[5099]: I0121 18:37:24.143521 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 18:37:24 crc kubenswrapper[5099]: I0121 18:37:24.143717 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 18:37:24 crc kubenswrapper[5099]: I0121 18:37:24.143798 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 18:37:24 crc kubenswrapper[5099]: I0121 18:37:24.143967 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 18:37:24 crc kubenswrapper[5099]: I0121 18:37:24.144031 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 18:37:24 crc kubenswrapper[5099]: I0121 18:37:24.144162 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 18:37:24 crc kubenswrapper[5099]: I0121 18:37:24.144310 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 18:37:24 crc kubenswrapper[5099]: I0121 18:37:24.144467 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 18:37:24 crc kubenswrapper[5099]: I0121 18:37:24.144468 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 18:37:24 crc kubenswrapper[5099]: I0121 
18:37:24.144853 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 18:37:24 crc kubenswrapper[5099]: I0121 18:37:24.148687 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-j8qh6-pull\" (UniqueName: \"kubernetes.io/secret/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-builder-dockercfg-j8qh6-pull\") pod \"sg-bridge-2-build\" (UID: \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 18:37:24 crc kubenswrapper[5099]: I0121 18:37:24.148969 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-j8qh6-push\" (UniqueName: \"kubernetes.io/secret/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-builder-dockercfg-j8qh6-push\") pod \"sg-bridge-2-build\" (UID: \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 18:37:24 crc kubenswrapper[5099]: I0121 18:37:24.164840 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cprl\" (UniqueName: \"kubernetes.io/projected/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-kube-api-access-5cprl\") pod \"sg-bridge-2-build\" (UID: \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 18:37:24 crc kubenswrapper[5099]: I0121 18:37:24.295866 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Jan 21 18:37:24 crc kubenswrapper[5099]: I0121 18:37:24.528856 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"] Jan 21 18:37:24 crc kubenswrapper[5099]: I0121 18:37:24.858396 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd","Type":"ContainerStarted","Data":"99bc6f54e5f823a1ecf790c0023591d8f50350a9812c9ace841e1adfd08f7884"} Jan 21 18:37:25 crc kubenswrapper[5099]: I0121 18:37:25.866041 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd","Type":"ContainerStarted","Data":"8a390d02fa4ac8a32d7c755d1b5b57170537aa38af7124023d3f798e7778ad7d"} Jan 21 18:37:26 crc kubenswrapper[5099]: I0121 18:37:26.874816 5099 generic.go:358] "Generic (PLEG): container finished" podID="8fd04092-0f4e-46c1-a1b0-d9c839d6edbd" containerID="8a390d02fa4ac8a32d7c755d1b5b57170537aa38af7124023d3f798e7778ad7d" exitCode=0 Jan 21 18:37:26 crc kubenswrapper[5099]: I0121 18:37:26.874986 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd","Type":"ContainerDied","Data":"8a390d02fa4ac8a32d7c755d1b5b57170537aa38af7124023d3f798e7778ad7d"} Jan 21 18:37:27 crc kubenswrapper[5099]: I0121 18:37:27.884384 5099 generic.go:358] "Generic (PLEG): container finished" podID="8fd04092-0f4e-46c1-a1b0-d9c839d6edbd" containerID="319d6ae08f34d9c40fd68a8276a11b6efc812e4765fd0e2aa198dde6ca0139db" exitCode=0 Jan 21 18:37:27 crc kubenswrapper[5099]: I0121 18:37:27.884491 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" 
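The run of entries above is the kubelet volume manager walking every volume of the sg-bridge-2-build pod through its three reconciler stages: VerifyControllerAttachedVolume (reconciler_common.go:251), MountVolume started (reconciler_common.go:224), and MountVolume.SetUp succeeded (operation_generator.go:615). The host-path and empty-dir volumes complete in well under a millisecond, while the projected service-account volume kube-api-access-5cprl is last to finish (18:37:24.142022 to 18:37:24.164840, roughly 23 ms), which is plausible since projected token volumes involve a token request rather than a plain bind mount. A minimal sketch for timing each volume from a saved copy of this journal ("kubelet.log" is a hypothetical file name, one entry per line):

    import re
    from datetime import datetime

    # Pair each "MountVolume started" line with its "MountVolume.SetUp succeeded"
    # line by volume name; the \\" sequences in the patterns match the escaped
    # quotes that journalctl shows inside the structured klog messages.
    HEADER = re.compile(r'I\d{4} (\d{2}:\d{2}:\d{2}\.\d{6})')
    STARTED = re.compile(r'MountVolume started for volume \\"([^"\\]+)\\"')
    DONE = re.compile(r'MountVolume\.SetUp succeeded for volume \\"([^"\\]+)\\"')

    def stamp(line):
        return datetime.strptime(HEADER.search(line).group(1), "%H:%M:%S.%f")

    began = {}
    for line in open("kubelet.log"):
        if (m := STARTED.search(line)):
            began[m.group(1)] = stamp(line)
        elif (m := DONE.search(line)) and m.group(1) in began:
            took = (stamp(line) - began[m.group(1)]).total_seconds()
            print(f"{m.group(1)}: {took * 1000:.1f} ms")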
event={"ID":"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd","Type":"ContainerDied","Data":"319d6ae08f34d9c40fd68a8276a11b6efc812e4765fd0e2aa198dde6ca0139db"} Jan 21 18:37:27 crc kubenswrapper[5099]: I0121 18:37:27.919867 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-2-build_8fd04092-0f4e-46c1-a1b0-d9c839d6edbd/manage-dockerfile/0.log" Jan 21 18:37:28 crc kubenswrapper[5099]: I0121 18:37:28.897241 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd","Type":"ContainerStarted","Data":"8176063c7b7b1558b247a8871defaf4348731af7140f4afe078b204dca4dcc28"} Jan 21 18:37:28 crc kubenswrapper[5099]: I0121 18:37:28.938601 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-bridge-2-build" podStartSLOduration=5.938579163 podStartE2EDuration="5.938579163s" podCreationTimestamp="2026-01-21 18:37:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:37:28.935008946 +0000 UTC m=+1406.348971407" watchObservedRunningTime="2026-01-21 18:37:28.938579163 +0000 UTC m=+1406.352541624" Jan 21 18:37:52 crc kubenswrapper[5099]: I0121 18:37:52.065386 5099 patch_prober.go:28] interesting pod/machine-config-daemon-hsl47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 18:37:52 crc kubenswrapper[5099]: I0121 18:37:52.066364 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 18:37:52 crc kubenswrapper[5099]: I0121 18:37:52.066436 5099 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" Jan 21 18:37:52 crc kubenswrapper[5099]: I0121 18:37:52.067455 5099 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c34863c08d0134cd7b5207ebf16a5d100ecccdeb0556f0934b642e587f43c4fa"} pod="openshift-machine-config-operator/machine-config-daemon-hsl47" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 18:37:52 crc kubenswrapper[5099]: I0121 18:37:52.067518 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" containerID="cri-o://c34863c08d0134cd7b5207ebf16a5d100ecccdeb0556f0934b642e587f43c4fa" gracePeriod=600 Jan 21 18:37:53 crc kubenswrapper[5099]: I0121 18:37:53.095007 5099 generic.go:358] "Generic (PLEG): container finished" podID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerID="c34863c08d0134cd7b5207ebf16a5d100ecccdeb0556f0934b642e587f43c4fa" exitCode=0 Jan 21 18:37:53 crc kubenswrapper[5099]: I0121 18:37:53.095216 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" 
event={"ID":"b19b831f-eaf0-4c77-859b-84eb9a5f233c","Type":"ContainerDied","Data":"c34863c08d0134cd7b5207ebf16a5d100ecccdeb0556f0934b642e587f43c4fa"} Jan 21 18:37:53 crc kubenswrapper[5099]: I0121 18:37:53.095720 5099 scope.go:117] "RemoveContainer" containerID="cf42f9592aaf93662bf63df43e028bef59eb8696172829a214d5c769d98dba4f" Jan 21 18:37:54 crc kubenswrapper[5099]: I0121 18:37:54.105951 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" event={"ID":"b19b831f-eaf0-4c77-859b-84eb9a5f233c","Type":"ContainerStarted","Data":"2597b26be062a6fac05730bc303e55baf9c4e17d7a57f2962c96e40cd24ca0da"} Jan 21 18:38:00 crc kubenswrapper[5099]: I0121 18:38:00.150105 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483678-2b5zf"] Jan 21 18:38:00 crc kubenswrapper[5099]: I0121 18:38:00.156292 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483678-2b5zf" Jan 21 18:38:00 crc kubenswrapper[5099]: I0121 18:38:00.161575 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-z79sf\"" Jan 21 18:38:00 crc kubenswrapper[5099]: I0121 18:38:00.161911 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 18:38:00 crc kubenswrapper[5099]: I0121 18:38:00.164776 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 18:38:00 crc kubenswrapper[5099]: I0121 18:38:00.167183 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483678-2b5zf"] Jan 21 18:38:00 crc kubenswrapper[5099]: I0121 18:38:00.213969 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7px7w\" (UniqueName: \"kubernetes.io/projected/9e537d8d-c124-46c8-a883-5a57e785095f-kube-api-access-7px7w\") pod \"auto-csr-approver-29483678-2b5zf\" (UID: \"9e537d8d-c124-46c8-a883-5a57e785095f\") " pod="openshift-infra/auto-csr-approver-29483678-2b5zf" Jan 21 18:38:00 crc kubenswrapper[5099]: I0121 18:38:00.315470 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7px7w\" (UniqueName: \"kubernetes.io/projected/9e537d8d-c124-46c8-a883-5a57e785095f-kube-api-access-7px7w\") pod \"auto-csr-approver-29483678-2b5zf\" (UID: \"9e537d8d-c124-46c8-a883-5a57e785095f\") " pod="openshift-infra/auto-csr-approver-29483678-2b5zf" Jan 21 18:38:00 crc kubenswrapper[5099]: I0121 18:38:00.341341 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7px7w\" (UniqueName: \"kubernetes.io/projected/9e537d8d-c124-46c8-a883-5a57e785095f-kube-api-access-7px7w\") pod \"auto-csr-approver-29483678-2b5zf\" (UID: \"9e537d8d-c124-46c8-a883-5a57e785095f\") " pod="openshift-infra/auto-csr-approver-29483678-2b5zf" Jan 21 18:38:00 crc kubenswrapper[5099]: I0121 18:38:00.483463 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483678-2b5zf" Jan 21 18:38:00 crc kubenswrapper[5099]: I0121 18:38:00.708537 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483678-2b5zf"] Jan 21 18:38:01 crc kubenswrapper[5099]: I0121 18:38:01.161302 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483678-2b5zf" event={"ID":"9e537d8d-c124-46c8-a883-5a57e785095f","Type":"ContainerStarted","Data":"2cb7246343cf696d3eb077d3e7a7c16689bc68e942c19e0a3a4f0a63389f077d"} Jan 21 18:38:04 crc kubenswrapper[5099]: I0121 18:38:04.188299 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483678-2b5zf" event={"ID":"9e537d8d-c124-46c8-a883-5a57e785095f","Type":"ContainerStarted","Data":"e5bec5d558bdce8166ad50eb7e3d4a32b802dd29d871e6d34a04b06540f52c88"} Jan 21 18:38:04 crc kubenswrapper[5099]: I0121 18:38:04.206572 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29483678-2b5zf" podStartSLOduration=1.308974608 podStartE2EDuration="4.206550852s" podCreationTimestamp="2026-01-21 18:38:00 +0000 UTC" firstStartedPulling="2026-01-21 18:38:00.721665479 +0000 UTC m=+1438.135627940" lastFinishedPulling="2026-01-21 18:38:03.619241723 +0000 UTC m=+1441.033204184" observedRunningTime="2026-01-21 18:38:04.20236047 +0000 UTC m=+1441.616322931" watchObservedRunningTime="2026-01-21 18:38:04.206550852 +0000 UTC m=+1441.620513313" Jan 21 18:38:05 crc kubenswrapper[5099]: I0121 18:38:05.199529 5099 generic.go:358] "Generic (PLEG): container finished" podID="9e537d8d-c124-46c8-a883-5a57e785095f" containerID="e5bec5d558bdce8166ad50eb7e3d4a32b802dd29d871e6d34a04b06540f52c88" exitCode=0 Jan 21 18:38:05 crc kubenswrapper[5099]: I0121 18:38:05.199648 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483678-2b5zf" event={"ID":"9e537d8d-c124-46c8-a883-5a57e785095f","Type":"ContainerDied","Data":"e5bec5d558bdce8166ad50eb7e3d4a32b802dd29d871e6d34a04b06540f52c88"} Jan 21 18:38:06 crc kubenswrapper[5099]: I0121 18:38:06.448357 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483678-2b5zf" Jan 21 18:38:06 crc kubenswrapper[5099]: I0121 18:38:06.539976 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7px7w\" (UniqueName: \"kubernetes.io/projected/9e537d8d-c124-46c8-a883-5a57e785095f-kube-api-access-7px7w\") pod \"9e537d8d-c124-46c8-a883-5a57e785095f\" (UID: \"9e537d8d-c124-46c8-a883-5a57e785095f\") " Jan 21 18:38:06 crc kubenswrapper[5099]: I0121 18:38:06.552046 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e537d8d-c124-46c8-a883-5a57e785095f-kube-api-access-7px7w" (OuterVolumeSpecName: "kube-api-access-7px7w") pod "9e537d8d-c124-46c8-a883-5a57e785095f" (UID: "9e537d8d-c124-46c8-a883-5a57e785095f"). InnerVolumeSpecName "kube-api-access-7px7w". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:38:06 crc kubenswrapper[5099]: I0121 18:38:06.642580 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7px7w\" (UniqueName: \"kubernetes.io/projected/9e537d8d-c124-46c8-a883-5a57e785095f-kube-api-access-7px7w\") on node \"crc\" DevicePath \"\"" Jan 21 18:38:07 crc kubenswrapper[5099]: I0121 18:38:07.009935 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483672-jw8wz"] Jan 21 18:38:07 crc kubenswrapper[5099]: I0121 18:38:07.014459 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483672-jw8wz"] Jan 21 18:38:07 crc kubenswrapper[5099]: I0121 18:38:07.227801 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483678-2b5zf" Jan 21 18:38:07 crc kubenswrapper[5099]: I0121 18:38:07.227796 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483678-2b5zf" event={"ID":"9e537d8d-c124-46c8-a883-5a57e785095f","Type":"ContainerDied","Data":"2cb7246343cf696d3eb077d3e7a7c16689bc68e942c19e0a3a4f0a63389f077d"} Jan 21 18:38:07 crc kubenswrapper[5099]: I0121 18:38:07.227974 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2cb7246343cf696d3eb077d3e7a7c16689bc68e942c19e0a3a4f0a63389f077d" Jan 21 18:38:07 crc kubenswrapper[5099]: I0121 18:38:07.923991 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00feba76-7b02-4f96-901a-29608e1a9227" path="/var/lib/kubelet/pods/00feba76-7b02-4f96-901a-29608e1a9227/volumes" Jan 21 18:38:17 crc kubenswrapper[5099]: I0121 18:38:17.320492 5099 scope.go:117] "RemoveContainer" containerID="11f41b213fba7cd097af67144e9fe8d9721185bf8815b4ab3a852f67cd956389" Jan 21 18:38:18 crc kubenswrapper[5099]: I0121 18:38:18.322441 5099 generic.go:358] "Generic (PLEG): container finished" podID="8fd04092-0f4e-46c1-a1b0-d9c839d6edbd" containerID="8176063c7b7b1558b247a8871defaf4348731af7140f4afe078b204dca4dcc28" exitCode=0 Jan 21 18:38:18 crc kubenswrapper[5099]: I0121 18:38:18.322531 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd","Type":"ContainerDied","Data":"8176063c7b7b1558b247a8871defaf4348731af7140f4afe078b204dca4dcc28"} Jan 21 18:38:19 crc kubenswrapper[5099]: I0121 18:38:19.588938 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Jan 21 18:38:19 crc kubenswrapper[5099]: I0121 18:38:19.609238 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cprl\" (UniqueName: \"kubernetes.io/projected/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-kube-api-access-5cprl\") pod \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\" (UID: \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\") " Jan 21 18:38:19 crc kubenswrapper[5099]: I0121 18:38:19.609285 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-build-blob-cache\") pod \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\" (UID: \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\") " Jan 21 18:38:19 crc kubenswrapper[5099]: I0121 18:38:19.609369 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-buildworkdir\") pod \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\" (UID: \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\") " Jan 21 18:38:19 crc kubenswrapper[5099]: I0121 18:38:19.609458 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-build-proxy-ca-bundles\") pod \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\" (UID: \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\") " Jan 21 18:38:19 crc kubenswrapper[5099]: I0121 18:38:19.609519 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-build-ca-bundles\") pod \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\" (UID: \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\") " Jan 21 18:38:19 crc kubenswrapper[5099]: I0121 18:38:19.609541 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-j8qh6-pull\" (UniqueName: \"kubernetes.io/secret/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-builder-dockercfg-j8qh6-pull\") pod \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\" (UID: \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\") " Jan 21 18:38:19 crc kubenswrapper[5099]: I0121 18:38:19.609565 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-build-system-configs\") pod \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\" (UID: \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\") " Jan 21 18:38:19 crc kubenswrapper[5099]: I0121 18:38:19.610207 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "8fd04092-0f4e-46c1-a1b0-d9c839d6edbd" (UID: "8fd04092-0f4e-46c1-a1b0-d9c839d6edbd"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:38:19 crc kubenswrapper[5099]: I0121 18:38:19.610247 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "8fd04092-0f4e-46c1-a1b0-d9c839d6edbd" (UID: "8fd04092-0f4e-46c1-a1b0-d9c839d6edbd"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:38:19 crc kubenswrapper[5099]: I0121 18:38:19.610284 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-container-storage-run\") pod \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\" (UID: \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\") " Jan 21 18:38:19 crc kubenswrapper[5099]: I0121 18:38:19.610325 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-container-storage-root\") pod \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\" (UID: \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\") " Jan 21 18:38:19 crc kubenswrapper[5099]: I0121 18:38:19.611193 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "8fd04092-0f4e-46c1-a1b0-d9c839d6edbd" (UID: "8fd04092-0f4e-46c1-a1b0-d9c839d6edbd"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:38:19 crc kubenswrapper[5099]: I0121 18:38:19.611229 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-node-pullsecrets\") pod \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\" (UID: \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\") " Jan 21 18:38:19 crc kubenswrapper[5099]: I0121 18:38:19.611284 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "8fd04092-0f4e-46c1-a1b0-d9c839d6edbd" (UID: "8fd04092-0f4e-46c1-a1b0-d9c839d6edbd"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:38:19 crc kubenswrapper[5099]: I0121 18:38:19.611321 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-j8qh6-push\" (UniqueName: \"kubernetes.io/secret/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-builder-dockercfg-j8qh6-push\") pod \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\" (UID: \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\") " Jan 21 18:38:19 crc kubenswrapper[5099]: I0121 18:38:19.611408 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-buildcachedir\") pod \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\" (UID: \"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd\") " Jan 21 18:38:19 crc kubenswrapper[5099]: I0121 18:38:19.611481 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "8fd04092-0f4e-46c1-a1b0-d9c839d6edbd" (UID: "8fd04092-0f4e-46c1-a1b0-d9c839d6edbd"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 18:38:19 crc kubenswrapper[5099]: I0121 18:38:19.612043 5099 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 21 18:38:19 crc kubenswrapper[5099]: I0121 18:38:19.612230 5099 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 21 18:38:19 crc kubenswrapper[5099]: I0121 18:38:19.612239 5099 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 21 18:38:19 crc kubenswrapper[5099]: I0121 18:38:19.612248 5099 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 18:38:19 crc kubenswrapper[5099]: I0121 18:38:19.612256 5099 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 18:38:19 crc kubenswrapper[5099]: I0121 18:38:19.617076 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-kube-api-access-5cprl" (OuterVolumeSpecName: "kube-api-access-5cprl") pod "8fd04092-0f4e-46c1-a1b0-d9c839d6edbd" (UID: "8fd04092-0f4e-46c1-a1b0-d9c839d6edbd"). InnerVolumeSpecName "kube-api-access-5cprl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:38:19 crc kubenswrapper[5099]: I0121 18:38:19.610621 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "8fd04092-0f4e-46c1-a1b0-d9c839d6edbd" (UID: "8fd04092-0f4e-46c1-a1b0-d9c839d6edbd"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 18:38:19 crc kubenswrapper[5099]: I0121 18:38:19.620332 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-builder-dockercfg-j8qh6-pull" (OuterVolumeSpecName: "builder-dockercfg-j8qh6-pull") pod "8fd04092-0f4e-46c1-a1b0-d9c839d6edbd" (UID: "8fd04092-0f4e-46c1-a1b0-d9c839d6edbd"). InnerVolumeSpecName "builder-dockercfg-j8qh6-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:38:19 crc kubenswrapper[5099]: I0121 18:38:19.622798 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "8fd04092-0f4e-46c1-a1b0-d9c839d6edbd" (UID: "8fd04092-0f4e-46c1-a1b0-d9c839d6edbd"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:38:19 crc kubenswrapper[5099]: I0121 18:38:19.629709 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-builder-dockercfg-j8qh6-push" (OuterVolumeSpecName: "builder-dockercfg-j8qh6-push") pod "8fd04092-0f4e-46c1-a1b0-d9c839d6edbd" (UID: "8fd04092-0f4e-46c1-a1b0-d9c839d6edbd"). InnerVolumeSpecName "builder-dockercfg-j8qh6-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:38:19 crc kubenswrapper[5099]: I0121 18:38:19.713747 5099 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 21 18:38:19 crc kubenswrapper[5099]: I0121 18:38:19.713799 5099 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-j8qh6-push\" (UniqueName: \"kubernetes.io/secret/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-builder-dockercfg-j8qh6-push\") on node \"crc\" DevicePath \"\"" Jan 21 18:38:19 crc kubenswrapper[5099]: I0121 18:38:19.713818 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5cprl\" (UniqueName: \"kubernetes.io/projected/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-kube-api-access-5cprl\") on node \"crc\" DevicePath \"\"" Jan 21 18:38:19 crc kubenswrapper[5099]: I0121 18:38:19.713831 5099 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-j8qh6-pull\" (UniqueName: \"kubernetes.io/secret/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-builder-dockercfg-j8qh6-pull\") on node \"crc\" DevicePath \"\"" Jan 21 18:38:19 crc kubenswrapper[5099]: I0121 18:38:19.713839 5099 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 21 18:38:19 crc kubenswrapper[5099]: I0121 18:38:19.736281 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "8fd04092-0f4e-46c1-a1b0-d9c839d6edbd" (UID: "8fd04092-0f4e-46c1-a1b0-d9c839d6edbd"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:38:19 crc kubenswrapper[5099]: I0121 18:38:19.815019 5099 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 21 18:38:20 crc kubenswrapper[5099]: I0121 18:38:20.310581 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "8fd04092-0f4e-46c1-a1b0-d9c839d6edbd" (UID: "8fd04092-0f4e-46c1-a1b0-d9c839d6edbd"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:38:20 crc kubenswrapper[5099]: I0121 18:38:20.322638 5099 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8fd04092-0f4e-46c1-a1b0-d9c839d6edbd-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 21 18:38:20 crc kubenswrapper[5099]: I0121 18:38:20.340666 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"8fd04092-0f4e-46c1-a1b0-d9c839d6edbd","Type":"ContainerDied","Data":"99bc6f54e5f823a1ecf790c0023591d8f50350a9812c9ace841e1adfd08f7884"} Jan 21 18:38:20 crc kubenswrapper[5099]: I0121 18:38:20.340717 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Jan 21 18:38:20 crc kubenswrapper[5099]: I0121 18:38:20.340726 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99bc6f54e5f823a1ecf790c0023591d8f50350a9812c9ace841e1adfd08f7884" Jan 21 18:38:24 crc kubenswrapper[5099]: I0121 18:38:24.633904 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 21 18:38:24 crc kubenswrapper[5099]: I0121 18:38:24.635321 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8fd04092-0f4e-46c1-a1b0-d9c839d6edbd" containerName="manage-dockerfile" Jan 21 18:38:24 crc kubenswrapper[5099]: I0121 18:38:24.635342 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fd04092-0f4e-46c1-a1b0-d9c839d6edbd" containerName="manage-dockerfile" Jan 21 18:38:24 crc kubenswrapper[5099]: I0121 18:38:24.635361 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9e537d8d-c124-46c8-a883-5a57e785095f" containerName="oc" Jan 21 18:38:24 crc kubenswrapper[5099]: I0121 18:38:24.635367 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e537d8d-c124-46c8-a883-5a57e785095f" containerName="oc" Jan 21 18:38:24 crc kubenswrapper[5099]: I0121 18:38:24.635395 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8fd04092-0f4e-46c1-a1b0-d9c839d6edbd" containerName="git-clone" Jan 21 18:38:24 crc kubenswrapper[5099]: I0121 18:38:24.635400 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fd04092-0f4e-46c1-a1b0-d9c839d6edbd" containerName="git-clone" Jan 21 18:38:24 crc kubenswrapper[5099]: I0121 18:38:24.635411 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8fd04092-0f4e-46c1-a1b0-d9c839d6edbd" containerName="docker-build" Jan 21 18:38:24 crc kubenswrapper[5099]: I0121 18:38:24.635416 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fd04092-0f4e-46c1-a1b0-d9c839d6edbd" containerName="docker-build" Jan 21 18:38:24 crc kubenswrapper[5099]: I0121 18:38:24.635543 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="8fd04092-0f4e-46c1-a1b0-d9c839d6edbd" containerName="docker-build" Jan 21 18:38:24 crc kubenswrapper[5099]: I0121 18:38:24.635562 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="9e537d8d-c124-46c8-a883-5a57e785095f" containerName="oc" Jan 21 18:38:24 crc kubenswrapper[5099]: I0121 18:38:24.724447 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 21 18:38:24 crc kubenswrapper[5099]: I0121 18:38:24.724667 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 18:38:24 crc kubenswrapper[5099]: I0121 18:38:24.727595 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-1-global-ca\"" Jan 21 18:38:24 crc kubenswrapper[5099]: I0121 18:38:24.728031 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-1-sys-config\"" Jan 21 18:38:24 crc kubenswrapper[5099]: I0121 18:38:24.728365 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-j8qh6\"" Jan 21 18:38:24 crc kubenswrapper[5099]: I0121 18:38:24.728637 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-1-ca\"" Jan 21 18:38:24 crc kubenswrapper[5099]: I0121 18:38:24.790848 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 18:38:24 crc kubenswrapper[5099]: I0121 18:38:24.791011 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-j8qh6-pull\" (UniqueName: \"kubernetes.io/secret/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-builder-dockercfg-j8qh6-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 18:38:24 crc kubenswrapper[5099]: I0121 18:38:24.791102 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 18:38:24 crc kubenswrapper[5099]: I0121 18:38:24.791139 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llgjz\" (UniqueName: \"kubernetes.io/projected/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-kube-api-access-llgjz\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 18:38:24 crc kubenswrapper[5099]: I0121 18:38:24.791185 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-j8qh6-push\" (UniqueName: \"kubernetes.io/secret/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-builder-dockercfg-j8qh6-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 18:38:24 crc kubenswrapper[5099]: I0121 18:38:24.791376 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 18:38:24 crc 
kubenswrapper[5099]: I0121 18:38:24.791523 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 18:38:24 crc kubenswrapper[5099]: I0121 18:38:24.791615 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 18:38:24 crc kubenswrapper[5099]: I0121 18:38:24.791711 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 18:38:24 crc kubenswrapper[5099]: I0121 18:38:24.791793 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 18:38:24 crc kubenswrapper[5099]: I0121 18:38:24.791880 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 18:38:24 crc kubenswrapper[5099]: I0121 18:38:24.791918 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 18:38:24 crc kubenswrapper[5099]: I0121 18:38:24.894373 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-j8qh6-pull\" (UniqueName: \"kubernetes.io/secret/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-builder-dockercfg-j8qh6-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 18:38:24 crc kubenswrapper[5099]: I0121 18:38:24.894642 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 18:38:24 crc kubenswrapper[5099]: I0121 18:38:24.894775 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-llgjz\" 
(UniqueName: \"kubernetes.io/projected/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-kube-api-access-llgjz\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 18:38:24 crc kubenswrapper[5099]: I0121 18:38:24.894859 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-j8qh6-push\" (UniqueName: \"kubernetes.io/secret/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-builder-dockercfg-j8qh6-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 18:38:24 crc kubenswrapper[5099]: I0121 18:38:24.894946 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 18:38:24 crc kubenswrapper[5099]: I0121 18:38:24.895057 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 18:38:24 crc kubenswrapper[5099]: I0121 18:38:24.895136 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 18:38:24 crc kubenswrapper[5099]: I0121 18:38:24.895247 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 18:38:24 crc kubenswrapper[5099]: I0121 18:38:24.895338 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 18:38:24 crc kubenswrapper[5099]: I0121 18:38:24.895425 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 18:38:24 crc kubenswrapper[5099]: I0121 18:38:24.895464 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 
18:38:24 crc kubenswrapper[5099]: I0121 18:38:24.895510 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 18:38:24 crc kubenswrapper[5099]: I0121 18:38:24.895522 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 18:38:24 crc kubenswrapper[5099]: I0121 18:38:24.895561 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 18:38:24 crc kubenswrapper[5099]: I0121 18:38:24.895608 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 18:38:24 crc kubenswrapper[5099]: I0121 18:38:24.895661 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 18:38:24 crc kubenswrapper[5099]: I0121 18:38:24.895952 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 18:38:24 crc kubenswrapper[5099]: I0121 18:38:24.896510 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 18:38:24 crc kubenswrapper[5099]: I0121 18:38:24.896589 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 18:38:24 crc kubenswrapper[5099]: I0121 18:38:24.896884 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: 
\"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 18:38:24 crc kubenswrapper[5099]: I0121 18:38:24.897084 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 18:38:24 crc kubenswrapper[5099]: I0121 18:38:24.901784 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-j8qh6-push\" (UniqueName: \"kubernetes.io/secret/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-builder-dockercfg-j8qh6-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 18:38:24 crc kubenswrapper[5099]: I0121 18:38:24.903132 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-j8qh6-pull\" (UniqueName: \"kubernetes.io/secret/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-builder-dockercfg-j8qh6-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 18:38:24 crc kubenswrapper[5099]: I0121 18:38:24.913297 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-llgjz\" (UniqueName: \"kubernetes.io/projected/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-kube-api-access-llgjz\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 18:38:25 crc kubenswrapper[5099]: I0121 18:38:25.044143 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 18:38:25 crc kubenswrapper[5099]: I0121 18:38:25.267776 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 21 18:38:25 crc kubenswrapper[5099]: I0121 18:38:25.410934 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"52ad31f4-9a20-4c9c-8076-d7bc2da8717e","Type":"ContainerStarted","Data":"d5d781004808a02c5c030a42e313baf5b27a1a6992d8b38f5ed79d6f6d52efb0"} Jan 21 18:38:26 crc kubenswrapper[5099]: I0121 18:38:26.428391 5099 generic.go:358] "Generic (PLEG): container finished" podID="52ad31f4-9a20-4c9c-8076-d7bc2da8717e" containerID="9097e8fd0c7b29d58f2d693fc9eb8be86659fd6a3b5b66b9fdc24f627700a992" exitCode=0 Jan 21 18:38:26 crc kubenswrapper[5099]: I0121 18:38:26.428576 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"52ad31f4-9a20-4c9c-8076-d7bc2da8717e","Type":"ContainerDied","Data":"9097e8fd0c7b29d58f2d693fc9eb8be86659fd6a3b5b66b9fdc24f627700a992"} Jan 21 18:38:27 crc kubenswrapper[5099]: I0121 18:38:27.448828 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"52ad31f4-9a20-4c9c-8076-d7bc2da8717e","Type":"ContainerStarted","Data":"339ca63e559a3ea2e3a89792bfd14d83dd015fda4f02ce1c7aff9782eacabc09"} Jan 21 18:38:27 crc kubenswrapper[5099]: I0121 18:38:27.480583 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-1-build" podStartSLOduration=3.480563491 podStartE2EDuration="3.480563491s" podCreationTimestamp="2026-01-21 18:38:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:38:27.477507626 +0000 UTC m=+1464.891470087" watchObservedRunningTime="2026-01-21 18:38:27.480563491 +0000 UTC m=+1464.894525962" Jan 21 18:38:35 crc kubenswrapper[5099]: I0121 18:38:35.238151 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 21 18:38:35 crc kubenswrapper[5099]: I0121 18:38:35.239092 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/prometheus-webhook-snmp-1-build" podUID="52ad31f4-9a20-4c9c-8076-d7bc2da8717e" containerName="docker-build" containerID="cri-o://339ca63e559a3ea2e3a89792bfd14d83dd015fda4f02ce1c7aff9782eacabc09" gracePeriod=30 Jan 21 18:38:35 crc kubenswrapper[5099]: I0121 18:38:35.533034 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_52ad31f4-9a20-4c9c-8076-d7bc2da8717e/docker-build/0.log" Jan 21 18:38:35 crc kubenswrapper[5099]: I0121 18:38:35.533550 5099 generic.go:358] "Generic (PLEG): container finished" podID="52ad31f4-9a20-4c9c-8076-d7bc2da8717e" containerID="339ca63e559a3ea2e3a89792bfd14d83dd015fda4f02ce1c7aff9782eacabc09" exitCode=1 Jan 21 18:38:35 crc kubenswrapper[5099]: I0121 18:38:35.534027 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"52ad31f4-9a20-4c9c-8076-d7bc2da8717e","Type":"ContainerDied","Data":"339ca63e559a3ea2e3a89792bfd14d83dd015fda4f02ce1c7aff9782eacabc09"} Jan 21 18:38:35 crc kubenswrapper[5099]: I0121 18:38:35.687980 5099 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_52ad31f4-9a20-4c9c-8076-d7bc2da8717e/docker-build/0.log" Jan 21 18:38:35 crc kubenswrapper[5099]: I0121 18:38:35.689220 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 18:38:35 crc kubenswrapper[5099]: I0121 18:38:35.813946 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-j8qh6-push\" (UniqueName: \"kubernetes.io/secret/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-builder-dockercfg-j8qh6-push\") pod \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\" (UID: \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\") " Jan 21 18:38:35 crc kubenswrapper[5099]: I0121 18:38:35.814285 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-build-blob-cache\") pod \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\" (UID: \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\") " Jan 21 18:38:35 crc kubenswrapper[5099]: I0121 18:38:35.814381 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-build-ca-bundles\") pod \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\" (UID: \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\") " Jan 21 18:38:35 crc kubenswrapper[5099]: I0121 18:38:35.814533 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-container-storage-run\") pod \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\" (UID: \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\") " Jan 21 18:38:35 crc kubenswrapper[5099]: I0121 18:38:35.814588 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llgjz\" (UniqueName: \"kubernetes.io/projected/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-kube-api-access-llgjz\") pod \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\" (UID: \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\") " Jan 21 18:38:35 crc kubenswrapper[5099]: I0121 18:38:35.814645 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-build-system-configs\") pod \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\" (UID: \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\") " Jan 21 18:38:35 crc kubenswrapper[5099]: I0121 18:38:35.814696 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-container-storage-root\") pod \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\" (UID: \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\") " Jan 21 18:38:35 crc kubenswrapper[5099]: I0121 18:38:35.814770 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-buildcachedir\") pod \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\" (UID: \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\") " Jan 21 18:38:35 crc kubenswrapper[5099]: I0121 18:38:35.815048 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-buildworkdir\") pod \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\" (UID: 
\"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\") " Jan 21 18:38:35 crc kubenswrapper[5099]: I0121 18:38:35.815082 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-node-pullsecrets\") pod \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\" (UID: \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\") " Jan 21 18:38:35 crc kubenswrapper[5099]: I0121 18:38:35.815239 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-j8qh6-pull\" (UniqueName: \"kubernetes.io/secret/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-builder-dockercfg-j8qh6-pull\") pod \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\" (UID: \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\") " Jan 21 18:38:35 crc kubenswrapper[5099]: I0121 18:38:35.815312 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-build-proxy-ca-bundles\") pod \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\" (UID: \"52ad31f4-9a20-4c9c-8076-d7bc2da8717e\") " Jan 21 18:38:35 crc kubenswrapper[5099]: I0121 18:38:35.817329 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "52ad31f4-9a20-4c9c-8076-d7bc2da8717e" (UID: "52ad31f4-9a20-4c9c-8076-d7bc2da8717e"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:38:35 crc kubenswrapper[5099]: I0121 18:38:35.817823 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "52ad31f4-9a20-4c9c-8076-d7bc2da8717e" (UID: "52ad31f4-9a20-4c9c-8076-d7bc2da8717e"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:38:35 crc kubenswrapper[5099]: I0121 18:38:35.822570 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "52ad31f4-9a20-4c9c-8076-d7bc2da8717e" (UID: "52ad31f4-9a20-4c9c-8076-d7bc2da8717e"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:38:35 crc kubenswrapper[5099]: I0121 18:38:35.822900 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "52ad31f4-9a20-4c9c-8076-d7bc2da8717e" (UID: "52ad31f4-9a20-4c9c-8076-d7bc2da8717e"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:38:35 crc kubenswrapper[5099]: I0121 18:38:35.822914 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "52ad31f4-9a20-4c9c-8076-d7bc2da8717e" (UID: "52ad31f4-9a20-4c9c-8076-d7bc2da8717e"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 18:38:35 crc kubenswrapper[5099]: I0121 18:38:35.822985 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "52ad31f4-9a20-4c9c-8076-d7bc2da8717e" (UID: "52ad31f4-9a20-4c9c-8076-d7bc2da8717e"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 18:38:35 crc kubenswrapper[5099]: I0121 18:38:35.823716 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "52ad31f4-9a20-4c9c-8076-d7bc2da8717e" (UID: "52ad31f4-9a20-4c9c-8076-d7bc2da8717e"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:38:35 crc kubenswrapper[5099]: I0121 18:38:35.824492 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-builder-dockercfg-j8qh6-push" (OuterVolumeSpecName: "builder-dockercfg-j8qh6-push") pod "52ad31f4-9a20-4c9c-8076-d7bc2da8717e" (UID: "52ad31f4-9a20-4c9c-8076-d7bc2da8717e"). InnerVolumeSpecName "builder-dockercfg-j8qh6-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:38:35 crc kubenswrapper[5099]: I0121 18:38:35.826458 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-builder-dockercfg-j8qh6-pull" (OuterVolumeSpecName: "builder-dockercfg-j8qh6-pull") pod "52ad31f4-9a20-4c9c-8076-d7bc2da8717e" (UID: "52ad31f4-9a20-4c9c-8076-d7bc2da8717e"). InnerVolumeSpecName "builder-dockercfg-j8qh6-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:38:35 crc kubenswrapper[5099]: I0121 18:38:35.826925 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-kube-api-access-llgjz" (OuterVolumeSpecName: "kube-api-access-llgjz") pod "52ad31f4-9a20-4c9c-8076-d7bc2da8717e" (UID: "52ad31f4-9a20-4c9c-8076-d7bc2da8717e"). InnerVolumeSpecName "kube-api-access-llgjz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:38:35 crc kubenswrapper[5099]: I0121 18:38:35.898966 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "52ad31f4-9a20-4c9c-8076-d7bc2da8717e" (UID: "52ad31f4-9a20-4c9c-8076-d7bc2da8717e"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:38:35 crc kubenswrapper[5099]: I0121 18:38:35.916801 5099 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 18:38:35 crc kubenswrapper[5099]: I0121 18:38:35.916881 5099 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-j8qh6-push\" (UniqueName: \"kubernetes.io/secret/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-builder-dockercfg-j8qh6-push\") on node \"crc\" DevicePath \"\"" Jan 21 18:38:35 crc kubenswrapper[5099]: I0121 18:38:35.916895 5099 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 21 18:38:35 crc kubenswrapper[5099]: I0121 18:38:35.916907 5099 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 18:38:35 crc kubenswrapper[5099]: I0121 18:38:35.916916 5099 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 21 18:38:35 crc kubenswrapper[5099]: I0121 18:38:35.916924 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-llgjz\" (UniqueName: \"kubernetes.io/projected/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-kube-api-access-llgjz\") on node \"crc\" DevicePath \"\"" Jan 21 18:38:35 crc kubenswrapper[5099]: I0121 18:38:35.916954 5099 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 21 18:38:35 crc kubenswrapper[5099]: I0121 18:38:35.916968 5099 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 21 18:38:35 crc kubenswrapper[5099]: I0121 18:38:35.916980 5099 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 21 18:38:35 crc kubenswrapper[5099]: I0121 18:38:35.916991 5099 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 21 18:38:35 crc kubenswrapper[5099]: I0121 18:38:35.917002 5099 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-j8qh6-pull\" (UniqueName: \"kubernetes.io/secret/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-builder-dockercfg-j8qh6-pull\") on node \"crc\" DevicePath \"\"" Jan 21 18:38:36 crc kubenswrapper[5099]: I0121 18:38:36.013795 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "52ad31f4-9a20-4c9c-8076-d7bc2da8717e" (UID: "52ad31f4-9a20-4c9c-8076-d7bc2da8717e"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:38:36 crc kubenswrapper[5099]: I0121 18:38:36.018362 5099 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/52ad31f4-9a20-4c9c-8076-d7bc2da8717e-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 21 18:38:36 crc kubenswrapper[5099]: I0121 18:38:36.542964 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_52ad31f4-9a20-4c9c-8076-d7bc2da8717e/docker-build/0.log" Jan 21 18:38:36 crc kubenswrapper[5099]: I0121 18:38:36.543465 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"52ad31f4-9a20-4c9c-8076-d7bc2da8717e","Type":"ContainerDied","Data":"d5d781004808a02c5c030a42e313baf5b27a1a6992d8b38f5ed79d6f6d52efb0"} Jan 21 18:38:36 crc kubenswrapper[5099]: I0121 18:38:36.543532 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 18:38:36 crc kubenswrapper[5099]: I0121 18:38:36.543553 5099 scope.go:117] "RemoveContainer" containerID="339ca63e559a3ea2e3a89792bfd14d83dd015fda4f02ce1c7aff9782eacabc09" Jan 21 18:38:36 crc kubenswrapper[5099]: I0121 18:38:36.597108 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 21 18:38:36 crc kubenswrapper[5099]: I0121 18:38:36.598553 5099 scope.go:117] "RemoveContainer" containerID="9097e8fd0c7b29d58f2d693fc9eb8be86659fd6a3b5b66b9fdc24f627700a992" Jan 21 18:38:36 crc kubenswrapper[5099]: I0121 18:38:36.607069 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 21 18:38:36 crc kubenswrapper[5099]: I0121 18:38:36.891398 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Jan 21 18:38:36 crc kubenswrapper[5099]: I0121 18:38:36.892317 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="52ad31f4-9a20-4c9c-8076-d7bc2da8717e" containerName="manage-dockerfile" Jan 21 18:38:36 crc kubenswrapper[5099]: I0121 18:38:36.892347 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="52ad31f4-9a20-4c9c-8076-d7bc2da8717e" containerName="manage-dockerfile" Jan 21 18:38:36 crc kubenswrapper[5099]: I0121 18:38:36.892375 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="52ad31f4-9a20-4c9c-8076-d7bc2da8717e" containerName="docker-build" Jan 21 18:38:36 crc kubenswrapper[5099]: I0121 18:38:36.892386 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="52ad31f4-9a20-4c9c-8076-d7bc2da8717e" containerName="docker-build" Jan 21 18:38:36 crc kubenswrapper[5099]: I0121 18:38:36.892537 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="52ad31f4-9a20-4c9c-8076-d7bc2da8717e" containerName="docker-build" Jan 21 18:38:36 crc kubenswrapper[5099]: I0121 18:38:36.933633 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Jan 21 18:38:36 crc kubenswrapper[5099]: I0121 18:38:36.933954 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 18:38:36 crc kubenswrapper[5099]: I0121 18:38:36.937235 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-j8qh6\"" Jan 21 18:38:36 crc kubenswrapper[5099]: I0121 18:38:36.937265 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-2-global-ca\"" Jan 21 18:38:36 crc kubenswrapper[5099]: I0121 18:38:36.937235 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-2-ca\"" Jan 21 18:38:36 crc kubenswrapper[5099]: I0121 18:38:36.938558 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-2-sys-config\"" Jan 21 18:38:37 crc kubenswrapper[5099]: I0121 18:38:37.039910 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-j8qh6-push\" (UniqueName: \"kubernetes.io/secret/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-builder-dockercfg-j8qh6-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 18:38:37 crc kubenswrapper[5099]: I0121 18:38:37.039966 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 18:38:37 crc kubenswrapper[5099]: I0121 18:38:37.040000 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 18:38:37 crc kubenswrapper[5099]: I0121 18:38:37.040024 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 18:38:37 crc kubenswrapper[5099]: I0121 18:38:37.040062 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 18:38:37 crc kubenswrapper[5099]: I0121 18:38:37.040151 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-j8qh6-pull\" (UniqueName: \"kubernetes.io/secret/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-builder-dockercfg-j8qh6-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 18:38:37 crc kubenswrapper[5099]: 
I0121 18:38:37.040180 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 18:38:37 crc kubenswrapper[5099]: I0121 18:38:37.043316 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 18:38:37 crc kubenswrapper[5099]: I0121 18:38:37.043514 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 18:38:37 crc kubenswrapper[5099]: I0121 18:38:37.145447 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-j8qh6-pull\" (UniqueName: \"kubernetes.io/secret/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-builder-dockercfg-j8qh6-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 18:38:37 crc kubenswrapper[5099]: I0121 18:38:37.145938 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 18:38:37 crc kubenswrapper[5099]: I0121 18:38:37.146118 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 18:38:37 crc kubenswrapper[5099]: I0121 18:38:37.146228 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 18:38:37 crc kubenswrapper[5099]: I0121 18:38:37.146183 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 18:38:37 crc kubenswrapper[5099]: I0121 18:38:37.146899 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-build-ca-bundles\") pod 
\"prometheus-webhook-snmp-2-build\" (UID: \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 18:38:37 crc kubenswrapper[5099]: I0121 18:38:37.147097 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 18:38:37 crc kubenswrapper[5099]: I0121 18:38:37.147150 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-j8qh6-push\" (UniqueName: \"kubernetes.io/secret/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-builder-dockercfg-j8qh6-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 18:38:37 crc kubenswrapper[5099]: I0121 18:38:37.147177 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 18:38:37 crc kubenswrapper[5099]: I0121 18:38:37.147204 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 18:38:37 crc kubenswrapper[5099]: I0121 18:38:37.147242 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kr5m\" (UniqueName: \"kubernetes.io/projected/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-kube-api-access-7kr5m\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 18:38:37 crc kubenswrapper[5099]: I0121 18:38:37.147302 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 18:38:37 crc kubenswrapper[5099]: I0121 18:38:37.147352 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 18:38:37 crc kubenswrapper[5099]: I0121 18:38:37.147381 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 18:38:37 crc 
kubenswrapper[5099]: I0121 18:38:37.147901 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 18:38:37 crc kubenswrapper[5099]: I0121 18:38:37.148100 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 18:38:37 crc kubenswrapper[5099]: I0121 18:38:37.148395 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 18:38:37 crc kubenswrapper[5099]: I0121 18:38:37.148375 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 18:38:37 crc kubenswrapper[5099]: I0121 18:38:37.148979 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 18:38:37 crc kubenswrapper[5099]: I0121 18:38:37.152950 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-j8qh6-push\" (UniqueName: \"kubernetes.io/secret/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-builder-dockercfg-j8qh6-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 18:38:37 crc kubenswrapper[5099]: I0121 18:38:37.153342 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-j8qh6-pull\" (UniqueName: \"kubernetes.io/secret/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-builder-dockercfg-j8qh6-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 18:38:37 crc kubenswrapper[5099]: I0121 18:38:37.249629 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 18:38:37 crc kubenswrapper[5099]: I0121 18:38:37.250184 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-build-blob-cache\") pod 
\"prometheus-webhook-snmp-2-build\" (UID: \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 18:38:37 crc kubenswrapper[5099]: I0121 18:38:37.250447 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 18:38:37 crc kubenswrapper[5099]: I0121 18:38:37.250056 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 18:38:37 crc kubenswrapper[5099]: I0121 18:38:37.250423 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7kr5m\" (UniqueName: \"kubernetes.io/projected/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-kube-api-access-7kr5m\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 18:38:37 crc kubenswrapper[5099]: I0121 18:38:37.280910 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kr5m\" (UniqueName: \"kubernetes.io/projected/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-kube-api-access-7kr5m\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 18:38:37 crc kubenswrapper[5099]: I0121 18:38:37.344302 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 18:38:37 crc kubenswrapper[5099]: I0121 18:38:37.592908 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Jan 21 18:38:37 crc kubenswrapper[5099]: I0121 18:38:37.922781 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52ad31f4-9a20-4c9c-8076-d7bc2da8717e" path="/var/lib/kubelet/pods/52ad31f4-9a20-4c9c-8076-d7bc2da8717e/volumes" Jan 21 18:38:38 crc kubenswrapper[5099]: I0121 18:38:38.565203 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"0c18b2f2-9374-4ce8-9cf2-f87d073342ce","Type":"ContainerStarted","Data":"1b312220bc3365c11e82f84be159ae056b9b8d7813e7be829f50ee0de90f220e"} Jan 21 18:38:38 crc kubenswrapper[5099]: I0121 18:38:38.565282 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"0c18b2f2-9374-4ce8-9cf2-f87d073342ce","Type":"ContainerStarted","Data":"f21ec7dc6ab10aeea1357f10ec0e0c10411804ac6112e8da83dacb245ed3ef30"} Jan 21 18:38:39 crc kubenswrapper[5099]: I0121 18:38:39.578113 5099 generic.go:358] "Generic (PLEG): container finished" podID="0c18b2f2-9374-4ce8-9cf2-f87d073342ce" containerID="1b312220bc3365c11e82f84be159ae056b9b8d7813e7be829f50ee0de90f220e" exitCode=0 Jan 21 18:38:39 crc kubenswrapper[5099]: I0121 18:38:39.578369 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"0c18b2f2-9374-4ce8-9cf2-f87d073342ce","Type":"ContainerDied","Data":"1b312220bc3365c11e82f84be159ae056b9b8d7813e7be829f50ee0de90f220e"} Jan 21 18:38:40 crc kubenswrapper[5099]: I0121 18:38:40.610772 5099 generic.go:358] "Generic (PLEG): container finished" podID="0c18b2f2-9374-4ce8-9cf2-f87d073342ce" containerID="53647f7942ae240ba8c7a217fb02844fd0666569e9f67360f2e39bcc521d2b5b" exitCode=0 Jan 21 18:38:40 crc kubenswrapper[5099]: I0121 18:38:40.610919 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"0c18b2f2-9374-4ce8-9cf2-f87d073342ce","Type":"ContainerDied","Data":"53647f7942ae240ba8c7a217fb02844fd0666569e9f67360f2e39bcc521d2b5b"} Jan 21 18:38:40 crc kubenswrapper[5099]: I0121 18:38:40.642551 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_0c18b2f2-9374-4ce8-9cf2-f87d073342ce/manage-dockerfile/0.log" Jan 21 18:38:41 crc kubenswrapper[5099]: I0121 18:38:41.624251 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"0c18b2f2-9374-4ce8-9cf2-f87d073342ce","Type":"ContainerStarted","Data":"6532959074cc3179b3b940468d7564c8d8d7ac13497ba78f58035eb1b0598c05"} Jan 21 18:38:41 crc kubenswrapper[5099]: I0121 18:38:41.658964 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-2-build" podStartSLOduration=5.658944403 podStartE2EDuration="5.658944403s" podCreationTimestamp="2026-01-21 18:38:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:38:41.655404227 +0000 UTC m=+1479.069366708" watchObservedRunningTime="2026-01-21 18:38:41.658944403 +0000 UTC m=+1479.072906864" Jan 21 18:39:04 crc kubenswrapper[5099]: I0121 18:39:04.570061 5099 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-multus_multus-6pvpm_d9b34413-4767-4d59-b13b-8f882453977a/kube-multus/0.log" Jan 21 18:39:04 crc kubenswrapper[5099]: I0121 18:39:04.573170 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6pvpm_d9b34413-4767-4d59-b13b-8f882453977a/kube-multus/0.log" Jan 21 18:39:04 crc kubenswrapper[5099]: I0121 18:39:04.581654 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 18:39:04 crc kubenswrapper[5099]: I0121 18:39:04.583313 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 18:39:17 crc kubenswrapper[5099]: I0121 18:39:17.477974 5099 scope.go:117] "RemoveContainer" containerID="b117e83c29f61fd1dd1ba9ceef6d2662f87ca89a838ef28324f6309324c541c6" Jan 21 18:39:17 crc kubenswrapper[5099]: I0121 18:39:17.509776 5099 scope.go:117] "RemoveContainer" containerID="47bfed3896126665ea6ebc995958bb361ceacc821eaacff23bc7f519e8e2efdb" Jan 21 18:39:33 crc kubenswrapper[5099]: I0121 18:39:33.072858 5099 generic.go:358] "Generic (PLEG): container finished" podID="0c18b2f2-9374-4ce8-9cf2-f87d073342ce" containerID="6532959074cc3179b3b940468d7564c8d8d7ac13497ba78f58035eb1b0598c05" exitCode=0 Jan 21 18:39:33 crc kubenswrapper[5099]: I0121 18:39:33.072949 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"0c18b2f2-9374-4ce8-9cf2-f87d073342ce","Type":"ContainerDied","Data":"6532959074cc3179b3b940468d7564c8d8d7ac13497ba78f58035eb1b0598c05"} Jan 21 18:39:34 crc kubenswrapper[5099]: I0121 18:39:34.346926 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 18:39:34 crc kubenswrapper[5099]: I0121 18:39:34.481660 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "0c18b2f2-9374-4ce8-9cf2-f87d073342ce" (UID: "0c18b2f2-9374-4ce8-9cf2-f87d073342ce"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 18:39:34 crc kubenswrapper[5099]: I0121 18:39:34.481674 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-node-pullsecrets\") pod \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\" (UID: \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\") " Jan 21 18:39:34 crc kubenswrapper[5099]: I0121 18:39:34.481974 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-j8qh6-push\" (UniqueName: \"kubernetes.io/secret/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-builder-dockercfg-j8qh6-push\") pod \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\" (UID: \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\") " Jan 21 18:39:34 crc kubenswrapper[5099]: I0121 18:39:34.482215 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-build-system-configs\") pod \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\" (UID: \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\") " Jan 21 18:39:34 crc kubenswrapper[5099]: I0121 18:39:34.482287 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-build-proxy-ca-bundles\") pod \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\" (UID: \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\") " Jan 21 18:39:34 crc kubenswrapper[5099]: I0121 18:39:34.482355 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-j8qh6-pull\" (UniqueName: \"kubernetes.io/secret/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-builder-dockercfg-j8qh6-pull\") pod \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\" (UID: \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\") " Jan 21 18:39:34 crc kubenswrapper[5099]: I0121 18:39:34.482434 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-container-storage-run\") pod \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\" (UID: \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\") " Jan 21 18:39:34 crc kubenswrapper[5099]: I0121 18:39:34.482497 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kr5m\" (UniqueName: \"kubernetes.io/projected/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-kube-api-access-7kr5m\") pod \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\" (UID: \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\") " Jan 21 18:39:34 crc kubenswrapper[5099]: I0121 18:39:34.482554 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-build-blob-cache\") pod \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\" (UID: \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\") " Jan 21 18:39:34 crc kubenswrapper[5099]: I0121 18:39:34.482576 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-build-ca-bundles\") pod \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\" (UID: \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\") " Jan 21 18:39:34 crc kubenswrapper[5099]: I0121 18:39:34.483880 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-container-storage-root\") pod \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\" (UID: \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\") " Jan 21 18:39:34 crc kubenswrapper[5099]: I0121 18:39:34.483950 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-buildworkdir\") pod \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\" (UID: \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\") " Jan 21 18:39:34 crc kubenswrapper[5099]: I0121 18:39:34.483984 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-buildcachedir\") pod \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\" (UID: \"0c18b2f2-9374-4ce8-9cf2-f87d073342ce\") " Jan 21 18:39:34 crc kubenswrapper[5099]: I0121 18:39:34.484574 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "0c18b2f2-9374-4ce8-9cf2-f87d073342ce" (UID: "0c18b2f2-9374-4ce8-9cf2-f87d073342ce"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:39:34 crc kubenswrapper[5099]: I0121 18:39:34.484670 5099 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 21 18:39:34 crc kubenswrapper[5099]: I0121 18:39:34.484713 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "0c18b2f2-9374-4ce8-9cf2-f87d073342ce" (UID: "0c18b2f2-9374-4ce8-9cf2-f87d073342ce"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 18:39:34 crc kubenswrapper[5099]: I0121 18:39:34.484753 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "0c18b2f2-9374-4ce8-9cf2-f87d073342ce" (UID: "0c18b2f2-9374-4ce8-9cf2-f87d073342ce"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:39:34 crc kubenswrapper[5099]: I0121 18:39:34.483975 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "0c18b2f2-9374-4ce8-9cf2-f87d073342ce" (UID: "0c18b2f2-9374-4ce8-9cf2-f87d073342ce"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:39:34 crc kubenswrapper[5099]: I0121 18:39:34.487554 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "0c18b2f2-9374-4ce8-9cf2-f87d073342ce" (UID: "0c18b2f2-9374-4ce8-9cf2-f87d073342ce"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:39:34 crc kubenswrapper[5099]: I0121 18:39:34.487982 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "0c18b2f2-9374-4ce8-9cf2-f87d073342ce" (UID: "0c18b2f2-9374-4ce8-9cf2-f87d073342ce"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:39:34 crc kubenswrapper[5099]: I0121 18:39:34.496483 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-builder-dockercfg-j8qh6-push" (OuterVolumeSpecName: "builder-dockercfg-j8qh6-push") pod "0c18b2f2-9374-4ce8-9cf2-f87d073342ce" (UID: "0c18b2f2-9374-4ce8-9cf2-f87d073342ce"). InnerVolumeSpecName "builder-dockercfg-j8qh6-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:39:34 crc kubenswrapper[5099]: I0121 18:39:34.496679 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-builder-dockercfg-j8qh6-pull" (OuterVolumeSpecName: "builder-dockercfg-j8qh6-pull") pod "0c18b2f2-9374-4ce8-9cf2-f87d073342ce" (UID: "0c18b2f2-9374-4ce8-9cf2-f87d073342ce"). InnerVolumeSpecName "builder-dockercfg-j8qh6-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:39:34 crc kubenswrapper[5099]: I0121 18:39:34.496914 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-kube-api-access-7kr5m" (OuterVolumeSpecName: "kube-api-access-7kr5m") pod "0c18b2f2-9374-4ce8-9cf2-f87d073342ce" (UID: "0c18b2f2-9374-4ce8-9cf2-f87d073342ce"). InnerVolumeSpecName "kube-api-access-7kr5m". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:39:34 crc kubenswrapper[5099]: I0121 18:39:34.586634 5099 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 18:39:34 crc kubenswrapper[5099]: I0121 18:39:34.586707 5099 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 21 18:39:34 crc kubenswrapper[5099]: I0121 18:39:34.586752 5099 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 21 18:39:34 crc kubenswrapper[5099]: I0121 18:39:34.586777 5099 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-j8qh6-push\" (UniqueName: \"kubernetes.io/secret/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-builder-dockercfg-j8qh6-push\") on node \"crc\" DevicePath \"\"" Jan 21 18:39:34 crc kubenswrapper[5099]: I0121 18:39:34.586803 5099 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 21 18:39:35 crc kubenswrapper[5099]: I0121 18:39:34.586827 5099 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 18:39:35 crc kubenswrapper[5099]: I0121 18:39:34.586847 5099 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-j8qh6-pull\" (UniqueName: \"kubernetes.io/secret/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-builder-dockercfg-j8qh6-pull\") on node \"crc\" DevicePath \"\"" Jan 21 18:39:35 crc kubenswrapper[5099]: I0121 18:39:34.586870 5099 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 21 18:39:35 crc kubenswrapper[5099]: I0121 18:39:34.586887 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7kr5m\" (UniqueName: \"kubernetes.io/projected/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-kube-api-access-7kr5m\") on node \"crc\" DevicePath \"\"" Jan 21 18:39:35 crc kubenswrapper[5099]: I0121 18:39:35.692146 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"0c18b2f2-9374-4ce8-9cf2-f87d073342ce","Type":"ContainerDied","Data":"f21ec7dc6ab10aeea1357f10ec0e0c10411804ac6112e8da83dacb245ed3ef30"} Jan 21 18:39:35 crc kubenswrapper[5099]: I0121 18:39:35.692223 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f21ec7dc6ab10aeea1357f10ec0e0c10411804ac6112e8da83dacb245ed3ef30" Jan 21 18:39:35 crc kubenswrapper[5099]: I0121 18:39:35.692448 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 18:39:35 crc kubenswrapper[5099]: I0121 18:39:35.790782 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "0c18b2f2-9374-4ce8-9cf2-f87d073342ce" (UID: "0c18b2f2-9374-4ce8-9cf2-f87d073342ce"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:39:35 crc kubenswrapper[5099]: I0121 18:39:35.857192 5099 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 21 18:39:36 crc kubenswrapper[5099]: I0121 18:39:36.591650 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "0c18b2f2-9374-4ce8-9cf2-f87d073342ce" (UID: "0c18b2f2-9374-4ce8-9cf2-f87d073342ce"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:39:36 crc kubenswrapper[5099]: I0121 18:39:36.669188 5099 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0c18b2f2-9374-4ce8-9cf2-f87d073342ce-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 21 18:39:40 crc kubenswrapper[5099]: I0121 18:39:40.783299 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-86d44c8fc9-hzgpv"] Jan 21 18:39:40 crc kubenswrapper[5099]: I0121 18:39:40.784701 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0c18b2f2-9374-4ce8-9cf2-f87d073342ce" containerName="docker-build" Jan 21 18:39:40 crc kubenswrapper[5099]: I0121 18:39:40.784752 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c18b2f2-9374-4ce8-9cf2-f87d073342ce" containerName="docker-build" Jan 21 18:39:40 crc kubenswrapper[5099]: I0121 18:39:40.784773 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0c18b2f2-9374-4ce8-9cf2-f87d073342ce" containerName="git-clone" Jan 21 18:39:40 crc kubenswrapper[5099]: I0121 18:39:40.784789 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c18b2f2-9374-4ce8-9cf2-f87d073342ce" containerName="git-clone" Jan 21 18:39:40 crc kubenswrapper[5099]: I0121 18:39:40.784797 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0c18b2f2-9374-4ce8-9cf2-f87d073342ce" containerName="manage-dockerfile" Jan 21 18:39:40 crc kubenswrapper[5099]: I0121 18:39:40.784802 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c18b2f2-9374-4ce8-9cf2-f87d073342ce" containerName="manage-dockerfile" Jan 21 18:39:40 crc kubenswrapper[5099]: I0121 18:39:40.784952 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="0c18b2f2-9374-4ce8-9cf2-f87d073342ce" containerName="docker-build" Jan 21 18:39:41 crc kubenswrapper[5099]: I0121 18:39:41.019692 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-86d44c8fc9-hzgpv"] Jan 21 18:39:41 crc kubenswrapper[5099]: I0121 18:39:41.020157 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-86d44c8fc9-hzgpv" Jan 21 18:39:41 crc kubenswrapper[5099]: I0121 18:39:41.023322 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-dockercfg-k7zpb\"" Jan 21 18:39:41 crc kubenswrapper[5099]: I0121 18:39:41.162027 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6htl\" (UniqueName: \"kubernetes.io/projected/bfae1586-4cb9-4058-a0e1-151a2e3b5ad7-kube-api-access-p6htl\") pod \"smart-gateway-operator-86d44c8fc9-hzgpv\" (UID: \"bfae1586-4cb9-4058-a0e1-151a2e3b5ad7\") " pod="service-telemetry/smart-gateway-operator-86d44c8fc9-hzgpv" Jan 21 18:39:41 crc kubenswrapper[5099]: I0121 18:39:41.162260 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/bfae1586-4cb9-4058-a0e1-151a2e3b5ad7-runner\") pod \"smart-gateway-operator-86d44c8fc9-hzgpv\" (UID: \"bfae1586-4cb9-4058-a0e1-151a2e3b5ad7\") " pod="service-telemetry/smart-gateway-operator-86d44c8fc9-hzgpv" Jan 21 18:39:41 crc kubenswrapper[5099]: I0121 18:39:41.264054 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p6htl\" (UniqueName: \"kubernetes.io/projected/bfae1586-4cb9-4058-a0e1-151a2e3b5ad7-kube-api-access-p6htl\") pod \"smart-gateway-operator-86d44c8fc9-hzgpv\" (UID: \"bfae1586-4cb9-4058-a0e1-151a2e3b5ad7\") " pod="service-telemetry/smart-gateway-operator-86d44c8fc9-hzgpv" Jan 21 18:39:41 crc kubenswrapper[5099]: I0121 18:39:41.264178 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/bfae1586-4cb9-4058-a0e1-151a2e3b5ad7-runner\") pod \"smart-gateway-operator-86d44c8fc9-hzgpv\" (UID: \"bfae1586-4cb9-4058-a0e1-151a2e3b5ad7\") " pod="service-telemetry/smart-gateway-operator-86d44c8fc9-hzgpv" Jan 21 18:39:41 crc kubenswrapper[5099]: I0121 18:39:41.264715 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/bfae1586-4cb9-4058-a0e1-151a2e3b5ad7-runner\") pod \"smart-gateway-operator-86d44c8fc9-hzgpv\" (UID: \"bfae1586-4cb9-4058-a0e1-151a2e3b5ad7\") " pod="service-telemetry/smart-gateway-operator-86d44c8fc9-hzgpv" Jan 21 18:39:41 crc kubenswrapper[5099]: I0121 18:39:41.292270 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6htl\" (UniqueName: \"kubernetes.io/projected/bfae1586-4cb9-4058-a0e1-151a2e3b5ad7-kube-api-access-p6htl\") pod \"smart-gateway-operator-86d44c8fc9-hzgpv\" (UID: \"bfae1586-4cb9-4058-a0e1-151a2e3b5ad7\") " pod="service-telemetry/smart-gateway-operator-86d44c8fc9-hzgpv" Jan 21 18:39:41 crc kubenswrapper[5099]: I0121 18:39:41.349434 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-86d44c8fc9-hzgpv" Jan 21 18:39:41 crc kubenswrapper[5099]: I0121 18:39:41.610356 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-86d44c8fc9-hzgpv"] Jan 21 18:39:41 crc kubenswrapper[5099]: W0121 18:39:41.617462 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfae1586_4cb9_4058_a0e1_151a2e3b5ad7.slice/crio-5e91a37497c4843f5fe41fcca6efaa139276b53426e50eda44d7848215198f16 WatchSource:0}: Error finding container 5e91a37497c4843f5fe41fcca6efaa139276b53426e50eda44d7848215198f16: Status 404 returned error can't find the container with id 5e91a37497c4843f5fe41fcca6efaa139276b53426e50eda44d7848215198f16 Jan 21 18:39:41 crc kubenswrapper[5099]: I0121 18:39:41.740356 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-86d44c8fc9-hzgpv" event={"ID":"bfae1586-4cb9-4058-a0e1-151a2e3b5ad7","Type":"ContainerStarted","Data":"5e91a37497c4843f5fe41fcca6efaa139276b53426e50eda44d7848215198f16"} Jan 21 18:39:44 crc kubenswrapper[5099]: I0121 18:39:44.348630 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-7d4d5cb5f7-p4dpk"] Jan 21 18:39:44 crc kubenswrapper[5099]: I0121 18:39:44.354339 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-7d4d5cb5f7-p4dpk" Jan 21 18:39:44 crc kubenswrapper[5099]: I0121 18:39:44.358132 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-dockercfg-msdm8\"" Jan 21 18:39:44 crc kubenswrapper[5099]: I0121 18:39:44.364632 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-7d4d5cb5f7-p4dpk"] Jan 21 18:39:44 crc kubenswrapper[5099]: I0121 18:39:44.528637 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/00721a47-1d2e-4b1f-8379-74e69855906d-runner\") pod \"service-telemetry-operator-7d4d5cb5f7-p4dpk\" (UID: \"00721a47-1d2e-4b1f-8379-74e69855906d\") " pod="service-telemetry/service-telemetry-operator-7d4d5cb5f7-p4dpk" Jan 21 18:39:44 crc kubenswrapper[5099]: I0121 18:39:44.529010 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhrxm\" (UniqueName: \"kubernetes.io/projected/00721a47-1d2e-4b1f-8379-74e69855906d-kube-api-access-lhrxm\") pod \"service-telemetry-operator-7d4d5cb5f7-p4dpk\" (UID: \"00721a47-1d2e-4b1f-8379-74e69855906d\") " pod="service-telemetry/service-telemetry-operator-7d4d5cb5f7-p4dpk" Jan 21 18:39:44 crc kubenswrapper[5099]: I0121 18:39:44.630833 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/00721a47-1d2e-4b1f-8379-74e69855906d-runner\") pod \"service-telemetry-operator-7d4d5cb5f7-p4dpk\" (UID: \"00721a47-1d2e-4b1f-8379-74e69855906d\") " pod="service-telemetry/service-telemetry-operator-7d4d5cb5f7-p4dpk" Jan 21 18:39:44 crc kubenswrapper[5099]: I0121 18:39:44.630934 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lhrxm\" (UniqueName: \"kubernetes.io/projected/00721a47-1d2e-4b1f-8379-74e69855906d-kube-api-access-lhrxm\") pod 
\"service-telemetry-operator-7d4d5cb5f7-p4dpk\" (UID: \"00721a47-1d2e-4b1f-8379-74e69855906d\") " pod="service-telemetry/service-telemetry-operator-7d4d5cb5f7-p4dpk" Jan 21 18:39:44 crc kubenswrapper[5099]: I0121 18:39:44.632099 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/00721a47-1d2e-4b1f-8379-74e69855906d-runner\") pod \"service-telemetry-operator-7d4d5cb5f7-p4dpk\" (UID: \"00721a47-1d2e-4b1f-8379-74e69855906d\") " pod="service-telemetry/service-telemetry-operator-7d4d5cb5f7-p4dpk" Jan 21 18:39:44 crc kubenswrapper[5099]: I0121 18:39:44.663875 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhrxm\" (UniqueName: \"kubernetes.io/projected/00721a47-1d2e-4b1f-8379-74e69855906d-kube-api-access-lhrxm\") pod \"service-telemetry-operator-7d4d5cb5f7-p4dpk\" (UID: \"00721a47-1d2e-4b1f-8379-74e69855906d\") " pod="service-telemetry/service-telemetry-operator-7d4d5cb5f7-p4dpk" Jan 21 18:39:44 crc kubenswrapper[5099]: I0121 18:39:44.686622 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-7d4d5cb5f7-p4dpk" Jan 21 18:39:45 crc kubenswrapper[5099]: I0121 18:39:45.015373 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-7d4d5cb5f7-p4dpk"] Jan 21 18:39:45 crc kubenswrapper[5099]: W0121 18:39:45.019438 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00721a47_1d2e_4b1f_8379_74e69855906d.slice/crio-245b1b9558b390271542da4c4220e3adb909ca129962dd528e37f3a6eea70e4e WatchSource:0}: Error finding container 245b1b9558b390271542da4c4220e3adb909ca129962dd528e37f3a6eea70e4e: Status 404 returned error can't find the container with id 245b1b9558b390271542da4c4220e3adb909ca129962dd528e37f3a6eea70e4e Jan 21 18:39:45 crc kubenswrapper[5099]: I0121 18:39:45.805484 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-7d4d5cb5f7-p4dpk" event={"ID":"00721a47-1d2e-4b1f-8379-74e69855906d","Type":"ContainerStarted","Data":"245b1b9558b390271542da4c4220e3adb909ca129962dd528e37f3a6eea70e4e"} Jan 21 18:40:00 crc kubenswrapper[5099]: I0121 18:40:00.139704 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483680-jwp8v"] Jan 21 18:40:00 crc kubenswrapper[5099]: I0121 18:40:00.161445 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483680-jwp8v"] Jan 21 18:40:00 crc kubenswrapper[5099]: I0121 18:40:00.161602 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483680-jwp8v" Jan 21 18:40:00 crc kubenswrapper[5099]: I0121 18:40:00.164348 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 18:40:00 crc kubenswrapper[5099]: I0121 18:40:00.164515 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-z79sf\"" Jan 21 18:40:00 crc kubenswrapper[5099]: I0121 18:40:00.164784 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 18:40:00 crc kubenswrapper[5099]: I0121 18:40:00.257377 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjnlb\" (UniqueName: \"kubernetes.io/projected/da873b20-de27-4eff-87df-d71a7310be1e-kube-api-access-rjnlb\") pod \"auto-csr-approver-29483680-jwp8v\" (UID: \"da873b20-de27-4eff-87df-d71a7310be1e\") " pod="openshift-infra/auto-csr-approver-29483680-jwp8v" Jan 21 18:40:00 crc kubenswrapper[5099]: I0121 18:40:00.359124 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rjnlb\" (UniqueName: \"kubernetes.io/projected/da873b20-de27-4eff-87df-d71a7310be1e-kube-api-access-rjnlb\") pod \"auto-csr-approver-29483680-jwp8v\" (UID: \"da873b20-de27-4eff-87df-d71a7310be1e\") " pod="openshift-infra/auto-csr-approver-29483680-jwp8v" Jan 21 18:40:00 crc kubenswrapper[5099]: I0121 18:40:00.378817 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjnlb\" (UniqueName: \"kubernetes.io/projected/da873b20-de27-4eff-87df-d71a7310be1e-kube-api-access-rjnlb\") pod \"auto-csr-approver-29483680-jwp8v\" (UID: \"da873b20-de27-4eff-87df-d71a7310be1e\") " pod="openshift-infra/auto-csr-approver-29483680-jwp8v" Jan 21 18:40:00 crc kubenswrapper[5099]: I0121 18:40:00.484352 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483680-jwp8v" Jan 21 18:40:03 crc kubenswrapper[5099]: I0121 18:40:03.461986 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483680-jwp8v"] Jan 21 18:40:04 crc kubenswrapper[5099]: I0121 18:40:04.036356 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-86d44c8fc9-hzgpv" event={"ID":"bfae1586-4cb9-4058-a0e1-151a2e3b5ad7","Type":"ContainerStarted","Data":"eded928dba4e477aa1c10d0469beb3847c752ab6adea5b8db85c9801b8d8a25a"} Jan 21 18:40:04 crc kubenswrapper[5099]: I0121 18:40:04.038980 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483680-jwp8v" event={"ID":"da873b20-de27-4eff-87df-d71a7310be1e","Type":"ContainerStarted","Data":"1d3a4c4e25308d4996ad448ce108adad255ebf8fd5dbae1784d700b2161536c0"} Jan 21 18:40:04 crc kubenswrapper[5099]: I0121 18:40:04.060171 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-86d44c8fc9-hzgpv" podStartSLOduration=2.379370985 podStartE2EDuration="24.060143222s" podCreationTimestamp="2026-01-21 18:39:40 +0000 UTC" firstStartedPulling="2026-01-21 18:39:41.619815148 +0000 UTC m=+1539.033777609" lastFinishedPulling="2026-01-21 18:40:03.300587385 +0000 UTC m=+1560.714549846" observedRunningTime="2026-01-21 18:40:04.057420185 +0000 UTC m=+1561.471382666" watchObservedRunningTime="2026-01-21 18:40:04.060143222 +0000 UTC m=+1561.474105683" Jan 21 18:40:05 crc kubenswrapper[5099]: I0121 18:40:05.047130 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483680-jwp8v" event={"ID":"da873b20-de27-4eff-87df-d71a7310be1e","Type":"ContainerStarted","Data":"632ae0b6af8c1e8a0bfd3336c0bc339e9face251ea1970e3f7cbffd91e62d5fb"} Jan 21 18:40:05 crc kubenswrapper[5099]: I0121 18:40:05.069411 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29483680-jwp8v" podStartSLOduration=3.8767145100000002 podStartE2EDuration="5.069379173s" podCreationTimestamp="2026-01-21 18:40:00 +0000 UTC" firstStartedPulling="2026-01-21 18:40:03.469983811 +0000 UTC m=+1560.883946272" lastFinishedPulling="2026-01-21 18:40:04.662648474 +0000 UTC m=+1562.076610935" observedRunningTime="2026-01-21 18:40:05.065600631 +0000 UTC m=+1562.479563092" watchObservedRunningTime="2026-01-21 18:40:05.069379173 +0000 UTC m=+1562.483341634" Jan 21 18:40:06 crc kubenswrapper[5099]: I0121 18:40:06.058305 5099 generic.go:358] "Generic (PLEG): container finished" podID="da873b20-de27-4eff-87df-d71a7310be1e" containerID="632ae0b6af8c1e8a0bfd3336c0bc339e9face251ea1970e3f7cbffd91e62d5fb" exitCode=0 Jan 21 18:40:06 crc kubenswrapper[5099]: I0121 18:40:06.058725 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483680-jwp8v" event={"ID":"da873b20-de27-4eff-87df-d71a7310be1e","Type":"ContainerDied","Data":"632ae0b6af8c1e8a0bfd3336c0bc339e9face251ea1970e3f7cbffd91e62d5fb"} Jan 21 18:40:08 crc kubenswrapper[5099]: I0121 18:40:08.624526 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483680-jwp8v" Jan 21 18:40:08 crc kubenswrapper[5099]: I0121 18:40:08.713865 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjnlb\" (UniqueName: \"kubernetes.io/projected/da873b20-de27-4eff-87df-d71a7310be1e-kube-api-access-rjnlb\") pod \"da873b20-de27-4eff-87df-d71a7310be1e\" (UID: \"da873b20-de27-4eff-87df-d71a7310be1e\") " Jan 21 18:40:08 crc kubenswrapper[5099]: I0121 18:40:08.737644 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da873b20-de27-4eff-87df-d71a7310be1e-kube-api-access-rjnlb" (OuterVolumeSpecName: "kube-api-access-rjnlb") pod "da873b20-de27-4eff-87df-d71a7310be1e" (UID: "da873b20-de27-4eff-87df-d71a7310be1e"). InnerVolumeSpecName "kube-api-access-rjnlb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:40:08 crc kubenswrapper[5099]: I0121 18:40:08.816298 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rjnlb\" (UniqueName: \"kubernetes.io/projected/da873b20-de27-4eff-87df-d71a7310be1e-kube-api-access-rjnlb\") on node \"crc\" DevicePath \"\"" Jan 21 18:40:09 crc kubenswrapper[5099]: I0121 18:40:09.099502 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483680-jwp8v" event={"ID":"da873b20-de27-4eff-87df-d71a7310be1e","Type":"ContainerDied","Data":"1d3a4c4e25308d4996ad448ce108adad255ebf8fd5dbae1784d700b2161536c0"} Jan 21 18:40:09 crc kubenswrapper[5099]: I0121 18:40:09.099553 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d3a4c4e25308d4996ad448ce108adad255ebf8fd5dbae1784d700b2161536c0" Jan 21 18:40:09 crc kubenswrapper[5099]: I0121 18:40:09.099646 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483680-jwp8v" Jan 21 18:40:09 crc kubenswrapper[5099]: I0121 18:40:09.684279 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483674-mc59c"] Jan 21 18:40:09 crc kubenswrapper[5099]: I0121 18:40:09.689814 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483674-mc59c"] Jan 21 18:40:09 crc kubenswrapper[5099]: I0121 18:40:09.940855 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c963dad-e808-40d6-b540-225e829dc1af" path="/var/lib/kubelet/pods/3c963dad-e808-40d6-b540-225e829dc1af/volumes" Jan 21 18:40:10 crc kubenswrapper[5099]: I0121 18:40:10.116159 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-7d4d5cb5f7-p4dpk" event={"ID":"00721a47-1d2e-4b1f-8379-74e69855906d","Type":"ContainerStarted","Data":"ebcd080415926131294c6bb93b665fccb3133b4c77f11b02115b71218eda5af1"} Jan 21 18:40:10 crc kubenswrapper[5099]: I0121 18:40:10.138467 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-7d4d5cb5f7-p4dpk" podStartSLOduration=2.103353464 podStartE2EDuration="26.138446665s" podCreationTimestamp="2026-01-21 18:39:44 +0000 UTC" firstStartedPulling="2026-01-21 18:39:45.034040948 +0000 UTC m=+1542.448003409" lastFinishedPulling="2026-01-21 18:40:09.069134149 +0000 UTC m=+1566.483096610" observedRunningTime="2026-01-21 18:40:10.137333238 +0000 UTC m=+1567.551295699" watchObservedRunningTime="2026-01-21 18:40:10.138446665 +0000 UTC m=+1567.552409126" Jan 21 18:40:17 crc kubenswrapper[5099]: I0121 18:40:17.709485 5099 scope.go:117] "RemoveContainer" containerID="e96bfbe549960756682345787e070fe9c280eed8c0e99dd52837af809d26d8df" Jan 21 18:40:22 crc kubenswrapper[5099]: I0121 18:40:22.065468 5099 patch_prober.go:28] interesting pod/machine-config-daemon-hsl47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 18:40:22 crc kubenswrapper[5099]: I0121 18:40:22.066435 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 18:40:38 crc kubenswrapper[5099]: I0121 18:40:38.267409 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-w5kvd"] Jan 21 18:40:38 crc kubenswrapper[5099]: I0121 18:40:38.269233 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="da873b20-de27-4eff-87df-d71a7310be1e" containerName="oc" Jan 21 18:40:38 crc kubenswrapper[5099]: I0121 18:40:38.269255 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="da873b20-de27-4eff-87df-d71a7310be1e" containerName="oc" Jan 21 18:40:38 crc kubenswrapper[5099]: I0121 18:40:38.269445 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="da873b20-de27-4eff-87df-d71a7310be1e" containerName="oc" Jan 21 18:40:39 crc kubenswrapper[5099]: I0121 18:40:39.227079 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-w5kvd"] Jan 21 18:40:39 crc 
Jan 21 18:40:39 crc kubenswrapper[5099]: I0121 18:40:39.233603 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-credentials\""
Jan 21 18:40:39 crc kubenswrapper[5099]: I0121 18:40:39.233701 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-credentials\""
Jan 21 18:40:39 crc kubenswrapper[5099]: I0121 18:40:39.233990 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-ca\""
Jan 21 18:40:39 crc kubenswrapper[5099]: I0121 18:40:39.234070 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-users\""
Jan 21 18:40:39 crc kubenswrapper[5099]: I0121 18:40:39.234228 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-ca\""
Jan 21 18:40:39 crc kubenswrapper[5099]: I0121 18:40:39.235824 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-dockercfg-snq56\""
Jan 21 18:40:39 crc kubenswrapper[5099]: I0121 18:40:39.236776 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-interconnect-sasl-config\""
Jan 21 18:40:39 crc kubenswrapper[5099]: I0121 18:40:39.263348 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxv2w\" (UniqueName: \"kubernetes.io/projected/6d433b03-9cad-429b-b20e-b0e71b410375-kube-api-access-zxv2w\") pod \"default-interconnect-55bf8d5cb-w5kvd\" (UID: \"6d433b03-9cad-429b-b20e-b0e71b410375\") " pod="service-telemetry/default-interconnect-55bf8d5cb-w5kvd"
Jan 21 18:40:39 crc kubenswrapper[5099]: I0121 18:40:39.263880 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/6d433b03-9cad-429b-b20e-b0e71b410375-sasl-config\") pod \"default-interconnect-55bf8d5cb-w5kvd\" (UID: \"6d433b03-9cad-429b-b20e-b0e71b410375\") " pod="service-telemetry/default-interconnect-55bf8d5cb-w5kvd"
Jan 21 18:40:39 crc kubenswrapper[5099]: I0121 18:40:39.263987 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/6d433b03-9cad-429b-b20e-b0e71b410375-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-w5kvd\" (UID: \"6d433b03-9cad-429b-b20e-b0e71b410375\") " pod="service-telemetry/default-interconnect-55bf8d5cb-w5kvd"
Jan 21 18:40:39 crc kubenswrapper[5099]: I0121 18:40:39.264101 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/6d433b03-9cad-429b-b20e-b0e71b410375-sasl-users\") pod \"default-interconnect-55bf8d5cb-w5kvd\" (UID: \"6d433b03-9cad-429b-b20e-b0e71b410375\") " pod="service-telemetry/default-interconnect-55bf8d5cb-w5kvd"
Jan 21 18:40:39 crc kubenswrapper[5099]: I0121 18:40:39.264201 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/6d433b03-9cad-429b-b20e-b0e71b410375-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-w5kvd\" (UID: \"6d433b03-9cad-429b-b20e-b0e71b410375\") " pod="service-telemetry/default-interconnect-55bf8d5cb-w5kvd"
\"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/6d433b03-9cad-429b-b20e-b0e71b410375-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-w5kvd\" (UID: \"6d433b03-9cad-429b-b20e-b0e71b410375\") " pod="service-telemetry/default-interconnect-55bf8d5cb-w5kvd" Jan 21 18:40:39 crc kubenswrapper[5099]: I0121 18:40:39.264281 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/6d433b03-9cad-429b-b20e-b0e71b410375-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-w5kvd\" (UID: \"6d433b03-9cad-429b-b20e-b0e71b410375\") " pod="service-telemetry/default-interconnect-55bf8d5cb-w5kvd" Jan 21 18:40:39 crc kubenswrapper[5099]: I0121 18:40:39.264385 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/6d433b03-9cad-429b-b20e-b0e71b410375-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-w5kvd\" (UID: \"6d433b03-9cad-429b-b20e-b0e71b410375\") " pod="service-telemetry/default-interconnect-55bf8d5cb-w5kvd" Jan 21 18:40:39 crc kubenswrapper[5099]: I0121 18:40:39.365791 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zxv2w\" (UniqueName: \"kubernetes.io/projected/6d433b03-9cad-429b-b20e-b0e71b410375-kube-api-access-zxv2w\") pod \"default-interconnect-55bf8d5cb-w5kvd\" (UID: \"6d433b03-9cad-429b-b20e-b0e71b410375\") " pod="service-telemetry/default-interconnect-55bf8d5cb-w5kvd" Jan 21 18:40:39 crc kubenswrapper[5099]: I0121 18:40:39.365850 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/6d433b03-9cad-429b-b20e-b0e71b410375-sasl-config\") pod \"default-interconnect-55bf8d5cb-w5kvd\" (UID: \"6d433b03-9cad-429b-b20e-b0e71b410375\") " pod="service-telemetry/default-interconnect-55bf8d5cb-w5kvd" Jan 21 18:40:39 crc kubenswrapper[5099]: I0121 18:40:39.365881 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/6d433b03-9cad-429b-b20e-b0e71b410375-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-w5kvd\" (UID: \"6d433b03-9cad-429b-b20e-b0e71b410375\") " pod="service-telemetry/default-interconnect-55bf8d5cb-w5kvd" Jan 21 18:40:39 crc kubenswrapper[5099]: I0121 18:40:39.365923 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/6d433b03-9cad-429b-b20e-b0e71b410375-sasl-users\") pod \"default-interconnect-55bf8d5cb-w5kvd\" (UID: \"6d433b03-9cad-429b-b20e-b0e71b410375\") " pod="service-telemetry/default-interconnect-55bf8d5cb-w5kvd" Jan 21 18:40:39 crc kubenswrapper[5099]: I0121 18:40:39.365966 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/6d433b03-9cad-429b-b20e-b0e71b410375-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-w5kvd\" (UID: \"6d433b03-9cad-429b-b20e-b0e71b410375\") " pod="service-telemetry/default-interconnect-55bf8d5cb-w5kvd" Jan 21 18:40:39 crc kubenswrapper[5099]: I0121 18:40:39.366002 5099 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/6d433b03-9cad-429b-b20e-b0e71b410375-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-w5kvd\" (UID: \"6d433b03-9cad-429b-b20e-b0e71b410375\") " pod="service-telemetry/default-interconnect-55bf8d5cb-w5kvd" Jan 21 18:40:39 crc kubenswrapper[5099]: I0121 18:40:39.366025 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/6d433b03-9cad-429b-b20e-b0e71b410375-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-w5kvd\" (UID: \"6d433b03-9cad-429b-b20e-b0e71b410375\") " pod="service-telemetry/default-interconnect-55bf8d5cb-w5kvd" Jan 21 18:40:39 crc kubenswrapper[5099]: I0121 18:40:39.367105 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/6d433b03-9cad-429b-b20e-b0e71b410375-sasl-config\") pod \"default-interconnect-55bf8d5cb-w5kvd\" (UID: \"6d433b03-9cad-429b-b20e-b0e71b410375\") " pod="service-telemetry/default-interconnect-55bf8d5cb-w5kvd" Jan 21 18:40:39 crc kubenswrapper[5099]: I0121 18:40:39.374077 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/6d433b03-9cad-429b-b20e-b0e71b410375-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-w5kvd\" (UID: \"6d433b03-9cad-429b-b20e-b0e71b410375\") " pod="service-telemetry/default-interconnect-55bf8d5cb-w5kvd" Jan 21 18:40:39 crc kubenswrapper[5099]: I0121 18:40:39.374274 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/6d433b03-9cad-429b-b20e-b0e71b410375-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-w5kvd\" (UID: \"6d433b03-9cad-429b-b20e-b0e71b410375\") " pod="service-telemetry/default-interconnect-55bf8d5cb-w5kvd" Jan 21 18:40:39 crc kubenswrapper[5099]: I0121 18:40:39.382650 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/6d433b03-9cad-429b-b20e-b0e71b410375-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-w5kvd\" (UID: \"6d433b03-9cad-429b-b20e-b0e71b410375\") " pod="service-telemetry/default-interconnect-55bf8d5cb-w5kvd" Jan 21 18:40:39 crc kubenswrapper[5099]: I0121 18:40:39.387927 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxv2w\" (UniqueName: \"kubernetes.io/projected/6d433b03-9cad-429b-b20e-b0e71b410375-kube-api-access-zxv2w\") pod \"default-interconnect-55bf8d5cb-w5kvd\" (UID: \"6d433b03-9cad-429b-b20e-b0e71b410375\") " pod="service-telemetry/default-interconnect-55bf8d5cb-w5kvd" Jan 21 18:40:39 crc kubenswrapper[5099]: I0121 18:40:39.388252 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/6d433b03-9cad-429b-b20e-b0e71b410375-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-w5kvd\" (UID: \"6d433b03-9cad-429b-b20e-b0e71b410375\") " pod="service-telemetry/default-interconnect-55bf8d5cb-w5kvd" Jan 21 18:40:39 crc kubenswrapper[5099]: I0121 18:40:39.398801 5099 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/6d433b03-9cad-429b-b20e-b0e71b410375-sasl-users\") pod \"default-interconnect-55bf8d5cb-w5kvd\" (UID: \"6d433b03-9cad-429b-b20e-b0e71b410375\") " pod="service-telemetry/default-interconnect-55bf8d5cb-w5kvd" Jan 21 18:40:39 crc kubenswrapper[5099]: I0121 18:40:39.551413 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-w5kvd" Jan 21 18:40:39 crc kubenswrapper[5099]: I0121 18:40:39.778477 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-w5kvd"] Jan 21 18:40:40 crc kubenswrapper[5099]: I0121 18:40:40.379248 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-w5kvd" event={"ID":"6d433b03-9cad-429b-b20e-b0e71b410375","Type":"ContainerStarted","Data":"55ccd61bdf930a663350d21f66bf9f24b90cfbd9a39148d8c7b6370d07198b3d"} Jan 21 18:40:45 crc kubenswrapper[5099]: I0121 18:40:45.427663 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-w5kvd" event={"ID":"6d433b03-9cad-429b-b20e-b0e71b410375","Type":"ContainerStarted","Data":"15f818bda2f2a55d02c2fb0b27db0a5100735858840bb4d4511fda1140e51a21"} Jan 21 18:40:50 crc kubenswrapper[5099]: I0121 18:40:50.133129 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-55bf8d5cb-w5kvd" podStartSLOduration=7.05068804 podStartE2EDuration="12.133098788s" podCreationTimestamp="2026-01-21 18:40:38 +0000 UTC" firstStartedPulling="2026-01-21 18:40:39.78605133 +0000 UTC m=+1597.200013791" lastFinishedPulling="2026-01-21 18:40:44.868462078 +0000 UTC m=+1602.282424539" observedRunningTime="2026-01-21 18:40:45.453446481 +0000 UTC m=+1602.867408942" watchObservedRunningTime="2026-01-21 18:40:50.133098788 +0000 UTC m=+1607.547061249" Jan 21 18:40:50 crc kubenswrapper[5099]: I0121 18:40:50.138014 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-default-0"] Jan 21 18:40:50 crc kubenswrapper[5099]: I0121 18:40:50.391652 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Jan 21 18:40:50 crc kubenswrapper[5099]: I0121 18:40:50.391940 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-default-0" Jan 21 18:40:50 crc kubenswrapper[5099]: I0121 18:40:50.396398 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-prometheus-proxy-tls\"" Jan 21 18:40:50 crc kubenswrapper[5099]: I0121 18:40:50.396926 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default\"" Jan 21 18:40:50 crc kubenswrapper[5099]: I0121 18:40:50.397160 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-session-secret\"" Jan 21 18:40:50 crc kubenswrapper[5099]: I0121 18:40:50.397293 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default-web-config\"" Jan 21 18:40:50 crc kubenswrapper[5099]: I0121 18:40:50.397329 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-stf-dockercfg-zllxw\"" Jan 21 18:40:50 crc kubenswrapper[5099]: I0121 18:40:50.397372 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-2\"" Jan 21 18:40:50 crc kubenswrapper[5099]: I0121 18:40:50.397672 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-0\"" Jan 21 18:40:50 crc kubenswrapper[5099]: I0121 18:40:50.397722 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default-tls-assets-0\"" Jan 21 18:40:50 crc kubenswrapper[5099]: I0121 18:40:50.397800 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"serving-certs-ca-bundle\"" Jan 21 18:40:50 crc kubenswrapper[5099]: I0121 18:40:50.405594 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-1\"" Jan 21 18:40:50 crc kubenswrapper[5099]: I0121 18:40:50.556237 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/60afeeee-13e5-4557-8409-391a5ae528c8-tls-assets\") pod \"prometheus-default-0\" (UID: \"60afeeee-13e5-4557-8409-391a5ae528c8\") " pod="service-telemetry/prometheus-default-0" Jan 21 18:40:50 crc kubenswrapper[5099]: I0121 18:40:50.556321 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/60afeeee-13e5-4557-8409-391a5ae528c8-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"60afeeee-13e5-4557-8409-391a5ae528c8\") " pod="service-telemetry/prometheus-default-0" Jan 21 18:40:50 crc kubenswrapper[5099]: I0121 18:40:50.556512 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/60afeeee-13e5-4557-8409-391a5ae528c8-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"60afeeee-13e5-4557-8409-391a5ae528c8\") " pod="service-telemetry/prometheus-default-0" Jan 21 18:40:50 crc kubenswrapper[5099]: I0121 18:40:50.556579 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/60afeeee-13e5-4557-8409-391a5ae528c8-config\") pod \"prometheus-default-0\" (UID: \"60afeeee-13e5-4557-8409-391a5ae528c8\") " pod="service-telemetry/prometheus-default-0" Jan 21 18:40:50 crc kubenswrapper[5099]: I0121 18:40:50.556656 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-db630ed3-8b62-4734-af19-323f6e7d5fb0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-db630ed3-8b62-4734-af19-323f6e7d5fb0\") pod \"prometheus-default-0\" (UID: \"60afeeee-13e5-4557-8409-391a5ae528c8\") " pod="service-telemetry/prometheus-default-0" Jan 21 18:40:50 crc kubenswrapper[5099]: I0121 18:40:50.556773 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/60afeeee-13e5-4557-8409-391a5ae528c8-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"60afeeee-13e5-4557-8409-391a5ae528c8\") " pod="service-telemetry/prometheus-default-0" Jan 21 18:40:50 crc kubenswrapper[5099]: I0121 18:40:50.556814 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w52cw\" (UniqueName: \"kubernetes.io/projected/60afeeee-13e5-4557-8409-391a5ae528c8-kube-api-access-w52cw\") pod \"prometheus-default-0\" (UID: \"60afeeee-13e5-4557-8409-391a5ae528c8\") " pod="service-telemetry/prometheus-default-0" Jan 21 18:40:50 crc kubenswrapper[5099]: I0121 18:40:50.556849 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/60afeeee-13e5-4557-8409-391a5ae528c8-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"60afeeee-13e5-4557-8409-391a5ae528c8\") " pod="service-telemetry/prometheus-default-0" Jan 21 18:40:50 crc kubenswrapper[5099]: I0121 18:40:50.556900 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/60afeeee-13e5-4557-8409-391a5ae528c8-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"60afeeee-13e5-4557-8409-391a5ae528c8\") " pod="service-telemetry/prometheus-default-0" Jan 21 18:40:50 crc kubenswrapper[5099]: I0121 18:40:50.556925 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/60afeeee-13e5-4557-8409-391a5ae528c8-web-config\") pod \"prometheus-default-0\" (UID: \"60afeeee-13e5-4557-8409-391a5ae528c8\") " pod="service-telemetry/prometheus-default-0" Jan 21 18:40:50 crc kubenswrapper[5099]: I0121 18:40:50.556982 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/60afeeee-13e5-4557-8409-391a5ae528c8-config-out\") pod \"prometheus-default-0\" (UID: \"60afeeee-13e5-4557-8409-391a5ae528c8\") " pod="service-telemetry/prometheus-default-0" Jan 21 18:40:50 crc kubenswrapper[5099]: I0121 18:40:50.557041 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/60afeeee-13e5-4557-8409-391a5ae528c8-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"60afeeee-13e5-4557-8409-391a5ae528c8\") " 
pod="service-telemetry/prometheus-default-0" Jan 21 18:40:50 crc kubenswrapper[5099]: I0121 18:40:50.658338 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-db630ed3-8b62-4734-af19-323f6e7d5fb0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-db630ed3-8b62-4734-af19-323f6e7d5fb0\") pod \"prometheus-default-0\" (UID: \"60afeeee-13e5-4557-8409-391a5ae528c8\") " pod="service-telemetry/prometheus-default-0" Jan 21 18:40:50 crc kubenswrapper[5099]: I0121 18:40:50.658382 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/60afeeee-13e5-4557-8409-391a5ae528c8-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"60afeeee-13e5-4557-8409-391a5ae528c8\") " pod="service-telemetry/prometheus-default-0" Jan 21 18:40:50 crc kubenswrapper[5099]: I0121 18:40:50.658561 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w52cw\" (UniqueName: \"kubernetes.io/projected/60afeeee-13e5-4557-8409-391a5ae528c8-kube-api-access-w52cw\") pod \"prometheus-default-0\" (UID: \"60afeeee-13e5-4557-8409-391a5ae528c8\") " pod="service-telemetry/prometheus-default-0" Jan 21 18:40:50 crc kubenswrapper[5099]: E0121 18:40:50.658724 5099 secret.go:189] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Jan 21 18:40:50 crc kubenswrapper[5099]: I0121 18:40:50.658748 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/60afeeee-13e5-4557-8409-391a5ae528c8-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"60afeeee-13e5-4557-8409-391a5ae528c8\") " pod="service-telemetry/prometheus-default-0" Jan 21 18:40:50 crc kubenswrapper[5099]: E0121 18:40:50.658852 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60afeeee-13e5-4557-8409-391a5ae528c8-secret-default-prometheus-proxy-tls podName:60afeeee-13e5-4557-8409-391a5ae528c8 nodeName:}" failed. No retries permitted until 2026-01-21 18:40:51.158824318 +0000 UTC m=+1608.572786949 (durationBeforeRetry 500ms). 
Jan 21 18:40:50 crc kubenswrapper[5099]: I0121 18:40:50.658915 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/60afeeee-13e5-4557-8409-391a5ae528c8-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"60afeeee-13e5-4557-8409-391a5ae528c8\") " pod="service-telemetry/prometheus-default-0"
Jan 21 18:40:50 crc kubenswrapper[5099]: I0121 18:40:50.658948 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/60afeeee-13e5-4557-8409-391a5ae528c8-web-config\") pod \"prometheus-default-0\" (UID: \"60afeeee-13e5-4557-8409-391a5ae528c8\") " pod="service-telemetry/prometheus-default-0"
Jan 21 18:40:50 crc kubenswrapper[5099]: I0121 18:40:50.658983 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/60afeeee-13e5-4557-8409-391a5ae528c8-config-out\") pod \"prometheus-default-0\" (UID: \"60afeeee-13e5-4557-8409-391a5ae528c8\") " pod="service-telemetry/prometheus-default-0"
Jan 21 18:40:50 crc kubenswrapper[5099]: I0121 18:40:50.659082 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/60afeeee-13e5-4557-8409-391a5ae528c8-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"60afeeee-13e5-4557-8409-391a5ae528c8\") " pod="service-telemetry/prometheus-default-0"
Jan 21 18:40:50 crc kubenswrapper[5099]: I0121 18:40:50.659169 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/60afeeee-13e5-4557-8409-391a5ae528c8-tls-assets\") pod \"prometheus-default-0\" (UID: \"60afeeee-13e5-4557-8409-391a5ae528c8\") " pod="service-telemetry/prometheus-default-0"
Jan 21 18:40:50 crc kubenswrapper[5099]: I0121 18:40:50.659211 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/60afeeee-13e5-4557-8409-391a5ae528c8-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"60afeeee-13e5-4557-8409-391a5ae528c8\") " pod="service-telemetry/prometheus-default-0"
Jan 21 18:40:50 crc kubenswrapper[5099]: I0121 18:40:50.659378 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/60afeeee-13e5-4557-8409-391a5ae528c8-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"60afeeee-13e5-4557-8409-391a5ae528c8\") " pod="service-telemetry/prometheus-default-0"
Jan 21 18:40:50 crc kubenswrapper[5099]: I0121 18:40:50.659440 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/60afeeee-13e5-4557-8409-391a5ae528c8-config\") pod \"prometheus-default-0\" (UID: \"60afeeee-13e5-4557-8409-391a5ae528c8\") " pod="service-telemetry/prometheus-default-0"
Jan 21 18:40:50 crc kubenswrapper[5099]: I0121 18:40:50.661030 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/60afeeee-13e5-4557-8409-391a5ae528c8-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"60afeeee-13e5-4557-8409-391a5ae528c8\") " pod="service-telemetry/prometheus-default-0"
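[Note: every kubenswrapper message above uses the klog header format: a severity letter (I/W/E/F), MMDD, wall-clock time, the emitting PID, and source file:line, e.g. "E0121 18:40:50.658724 5099 secret.go:189]". A small Python sketch that splits that header; the regex is illustrative, not kubelet code:]

    # Sketch: split a klog header (severity, MMDD, wall time, PID, file:line)
    # as seen in the entries above.
    import re

    KLOG = re.compile(
        r"(?P<sev>[IWEF])(?P<mmdd>\d{4}) (?P<time>\d{2}:\d{2}:\d{2}\.\d+)\s+"
        r"(?P<pid>\d+) (?P<file>[\w./_-]+):(?P<line>\d+)\] (?P<msg>.*)")

    sample = ("E0121 18:40:50.658724 5099 secret.go:189] Couldn't get secret "
              'service-telemetry/default-prometheus-proxy-tls: secret '
              '"default-prometheus-proxy-tls" not found')
    m = KLOG.match(sample)
    print(m.group("sev"), m.group("file"), m.group("msg")[:40])

[Filtering on sev in {"E", "W"} is a quick way to pull the mount failures out of the surrounding informational noise.]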
"MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/60afeeee-13e5-4557-8409-391a5ae528c8-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"60afeeee-13e5-4557-8409-391a5ae528c8\") " pod="service-telemetry/prometheus-default-0" Jan 21 18:40:50 crc kubenswrapper[5099]: I0121 18:40:50.661251 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/60afeeee-13e5-4557-8409-391a5ae528c8-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"60afeeee-13e5-4557-8409-391a5ae528c8\") " pod="service-telemetry/prometheus-default-0" Jan 21 18:40:50 crc kubenswrapper[5099]: I0121 18:40:50.661495 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/60afeeee-13e5-4557-8409-391a5ae528c8-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"60afeeee-13e5-4557-8409-391a5ae528c8\") " pod="service-telemetry/prometheus-default-0" Jan 21 18:40:50 crc kubenswrapper[5099]: I0121 18:40:50.662045 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/60afeeee-13e5-4557-8409-391a5ae528c8-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"60afeeee-13e5-4557-8409-391a5ae528c8\") " pod="service-telemetry/prometheus-default-0" Jan 21 18:40:50 crc kubenswrapper[5099]: I0121 18:40:50.663938 5099 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 21 18:40:50 crc kubenswrapper[5099]: I0121 18:40:50.664029 5099 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-db630ed3-8b62-4734-af19-323f6e7d5fb0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-db630ed3-8b62-4734-af19-323f6e7d5fb0\") pod \"prometheus-default-0\" (UID: \"60afeeee-13e5-4557-8409-391a5ae528c8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1648bf321e4a6a1837427871462e5fc73bc1685f3f3bc253ca4c4018d82d5250/globalmount\"" pod="service-telemetry/prometheus-default-0" Jan 21 18:40:50 crc kubenswrapper[5099]: I0121 18:40:50.666872 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/60afeeee-13e5-4557-8409-391a5ae528c8-web-config\") pod \"prometheus-default-0\" (UID: \"60afeeee-13e5-4557-8409-391a5ae528c8\") " pod="service-telemetry/prometheus-default-0" Jan 21 18:40:50 crc kubenswrapper[5099]: I0121 18:40:50.667126 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/60afeeee-13e5-4557-8409-391a5ae528c8-config\") pod \"prometheus-default-0\" (UID: \"60afeeee-13e5-4557-8409-391a5ae528c8\") " pod="service-telemetry/prometheus-default-0" Jan 21 18:40:50 crc kubenswrapper[5099]: I0121 18:40:50.667199 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/60afeeee-13e5-4557-8409-391a5ae528c8-tls-assets\") pod \"prometheus-default-0\" (UID: \"60afeeee-13e5-4557-8409-391a5ae528c8\") " pod="service-telemetry/prometheus-default-0" Jan 21 18:40:50 crc kubenswrapper[5099]: I0121 18:40:50.677796 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/60afeeee-13e5-4557-8409-391a5ae528c8-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"60afeeee-13e5-4557-8409-391a5ae528c8\") " pod="service-telemetry/prometheus-default-0" Jan 21 18:40:50 crc kubenswrapper[5099]: I0121 18:40:50.686678 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w52cw\" (UniqueName: \"kubernetes.io/projected/60afeeee-13e5-4557-8409-391a5ae528c8-kube-api-access-w52cw\") pod \"prometheus-default-0\" (UID: \"60afeeee-13e5-4557-8409-391a5ae528c8\") " pod="service-telemetry/prometheus-default-0" Jan 21 18:40:50 crc kubenswrapper[5099]: I0121 18:40:50.697714 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-db630ed3-8b62-4734-af19-323f6e7d5fb0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-db630ed3-8b62-4734-af19-323f6e7d5fb0\") pod \"prometheus-default-0\" (UID: \"60afeeee-13e5-4557-8409-391a5ae528c8\") " pod="service-telemetry/prometheus-default-0" Jan 21 18:40:50 crc kubenswrapper[5099]: I0121 18:40:50.699936 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/60afeeee-13e5-4557-8409-391a5ae528c8-config-out\") pod \"prometheus-default-0\" (UID: \"60afeeee-13e5-4557-8409-391a5ae528c8\") " pod="service-telemetry/prometheus-default-0" Jan 21 18:40:51 crc kubenswrapper[5099]: I0121 18:40:51.169832 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/60afeeee-13e5-4557-8409-391a5ae528c8-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"60afeeee-13e5-4557-8409-391a5ae528c8\") " pod="service-telemetry/prometheus-default-0" Jan 21 18:40:51 crc kubenswrapper[5099]: I0121 18:40:51.176482 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/60afeeee-13e5-4557-8409-391a5ae528c8-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"60afeeee-13e5-4557-8409-391a5ae528c8\") " pod="service-telemetry/prometheus-default-0" Jan 21 18:40:51 crc kubenswrapper[5099]: I0121 18:40:51.317976 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-default-0" Jan 21 18:40:51 crc kubenswrapper[5099]: I0121 18:40:51.558189 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Jan 21 18:40:52 crc kubenswrapper[5099]: I0121 18:40:52.064881 5099 patch_prober.go:28] interesting pod/machine-config-daemon-hsl47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 18:40:52 crc kubenswrapper[5099]: I0121 18:40:52.065004 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 18:40:52 crc kubenswrapper[5099]: I0121 18:40:52.481868 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"60afeeee-13e5-4557-8409-391a5ae528c8","Type":"ContainerStarted","Data":"ba0b091b25d3c407f163d9a8707a24f8ffdbd1d62a6c1e9e2a885139d7a94130"} Jan 21 18:40:55 crc kubenswrapper[5099]: I0121 18:40:55.506214 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"60afeeee-13e5-4557-8409-391a5ae528c8","Type":"ContainerStarted","Data":"86ad5bffe4bd9261d2af081e84667a73347b89feb918cc9d6af04791abb62bdc"} Jan 21 18:41:01 crc kubenswrapper[5099]: I0121 18:41:01.093837 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-snmp-webhook-694dc457d5-rk86p"] Jan 21 18:41:01 crc kubenswrapper[5099]: I0121 18:41:01.112333 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-694dc457d5-rk86p" Jan 21 18:41:01 crc kubenswrapper[5099]: I0121 18:41:01.129235 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-694dc457d5-rk86p"] Jan 21 18:41:01 crc kubenswrapper[5099]: I0121 18:41:01.196533 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv4xr\" (UniqueName: \"kubernetes.io/projected/d63f0418-bf6c-4a0a-8b72-8fa1215358c0-kube-api-access-pv4xr\") pod \"default-snmp-webhook-694dc457d5-rk86p\" (UID: \"d63f0418-bf6c-4a0a-8b72-8fa1215358c0\") " pod="service-telemetry/default-snmp-webhook-694dc457d5-rk86p" Jan 21 18:41:01 crc kubenswrapper[5099]: I0121 18:41:01.298873 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pv4xr\" (UniqueName: \"kubernetes.io/projected/d63f0418-bf6c-4a0a-8b72-8fa1215358c0-kube-api-access-pv4xr\") pod \"default-snmp-webhook-694dc457d5-rk86p\" (UID: \"d63f0418-bf6c-4a0a-8b72-8fa1215358c0\") " pod="service-telemetry/default-snmp-webhook-694dc457d5-rk86p" Jan 21 18:41:01 crc kubenswrapper[5099]: I0121 18:41:01.322769 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv4xr\" (UniqueName: \"kubernetes.io/projected/d63f0418-bf6c-4a0a-8b72-8fa1215358c0-kube-api-access-pv4xr\") pod \"default-snmp-webhook-694dc457d5-rk86p\" (UID: \"d63f0418-bf6c-4a0a-8b72-8fa1215358c0\") " pod="service-telemetry/default-snmp-webhook-694dc457d5-rk86p" Jan 21 18:41:01 crc kubenswrapper[5099]: I0121 18:41:01.431365 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-snmp-webhook-694dc457d5-rk86p" Jan 21 18:41:01 crc kubenswrapper[5099]: I0121 18:41:01.657068 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-694dc457d5-rk86p"] Jan 21 18:41:02 crc kubenswrapper[5099]: I0121 18:41:02.569092 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-694dc457d5-rk86p" event={"ID":"d63f0418-bf6c-4a0a-8b72-8fa1215358c0","Type":"ContainerStarted","Data":"0dabb772c15f507eb0835811bcdbc8328e8f3f48b3ed1ec112aa3bf8361e78fe"} Jan 21 18:41:03 crc kubenswrapper[5099]: I0121 18:41:03.581546 5099 generic.go:358] "Generic (PLEG): container finished" podID="60afeeee-13e5-4557-8409-391a5ae528c8" containerID="86ad5bffe4bd9261d2af081e84667a73347b89feb918cc9d6af04791abb62bdc" exitCode=0 Jan 21 18:41:03 crc kubenswrapper[5099]: I0121 18:41:03.581662 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"60afeeee-13e5-4557-8409-391a5ae528c8","Type":"ContainerDied","Data":"86ad5bffe4bd9261d2af081e84667a73347b89feb918cc9d6af04791abb62bdc"} Jan 21 18:41:05 crc kubenswrapper[5099]: I0121 18:41:05.024486 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/alertmanager-default-0"] Jan 21 18:41:05 crc kubenswrapper[5099]: I0121 18:41:05.072993 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Jan 21 18:41:05 crc kubenswrapper[5099]: I0121 18:41:05.073220 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/alertmanager-default-0" Jan 21 18:41:05 crc kubenswrapper[5099]: I0121 18:41:05.075807 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-alertmanager-proxy-tls\"" Jan 21 18:41:05 crc kubenswrapper[5099]: I0121 18:41:05.075928 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-cluster-tls-config\"" Jan 21 18:41:05 crc kubenswrapper[5099]: I0121 18:41:05.076218 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-web-config\"" Jan 21 18:41:05 crc kubenswrapper[5099]: I0121 18:41:05.076385 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-tls-assets-0\"" Jan 21 18:41:05 crc kubenswrapper[5099]: I0121 18:41:05.077233 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-stf-dockercfg-66bhx\"" Jan 21 18:41:05 crc kubenswrapper[5099]: I0121 18:41:05.079094 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-generated\"" Jan 21 18:41:05 crc kubenswrapper[5099]: I0121 18:41:05.162823 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe-config-out\") pod \"alertmanager-default-0\" (UID: \"463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe\") " pod="service-telemetry/alertmanager-default-0" Jan 21 18:41:05 crc kubenswrapper[5099]: I0121 18:41:05.162887 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzk9q\" (UniqueName: \"kubernetes.io/projected/463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe-kube-api-access-xzk9q\") pod \"alertmanager-default-0\" (UID: \"463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe\") " pod="service-telemetry/alertmanager-default-0" Jan 21 18:41:05 crc kubenswrapper[5099]: I0121 18:41:05.162958 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe-config-volume\") pod \"alertmanager-default-0\" (UID: \"463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe\") " pod="service-telemetry/alertmanager-default-0" Jan 21 18:41:05 crc kubenswrapper[5099]: I0121 18:41:05.162980 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe-web-config\") pod \"alertmanager-default-0\" (UID: \"463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe\") " pod="service-telemetry/alertmanager-default-0" Jan 21 18:41:05 crc kubenswrapper[5099]: I0121 18:41:05.163343 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe\") " pod="service-telemetry/alertmanager-default-0" Jan 21 18:41:05 crc kubenswrapper[5099]: I0121 18:41:05.163451 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: 
\"kubernetes.io/secret/463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe\") " pod="service-telemetry/alertmanager-default-0" Jan 21 18:41:05 crc kubenswrapper[5099]: I0121 18:41:05.163507 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe\") " pod="service-telemetry/alertmanager-default-0" Jan 21 18:41:05 crc kubenswrapper[5099]: I0121 18:41:05.166076 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-24a5c41a-ecf3-4582-9ec8-4dda71a02986\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-24a5c41a-ecf3-4582-9ec8-4dda71a02986\") pod \"alertmanager-default-0\" (UID: \"463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe\") " pod="service-telemetry/alertmanager-default-0" Jan 21 18:41:05 crc kubenswrapper[5099]: I0121 18:41:05.166341 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe-tls-assets\") pod \"alertmanager-default-0\" (UID: \"463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe\") " pod="service-telemetry/alertmanager-default-0" Jan 21 18:41:05 crc kubenswrapper[5099]: I0121 18:41:05.269113 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe\") " pod="service-telemetry/alertmanager-default-0" Jan 21 18:41:05 crc kubenswrapper[5099]: I0121 18:41:05.269176 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe\") " pod="service-telemetry/alertmanager-default-0" Jan 21 18:41:05 crc kubenswrapper[5099]: I0121 18:41:05.269228 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-24a5c41a-ecf3-4582-9ec8-4dda71a02986\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-24a5c41a-ecf3-4582-9ec8-4dda71a02986\") pod \"alertmanager-default-0\" (UID: \"463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe\") " pod="service-telemetry/alertmanager-default-0" Jan 21 18:41:05 crc kubenswrapper[5099]: I0121 18:41:05.269280 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe-tls-assets\") pod \"alertmanager-default-0\" (UID: \"463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe\") " pod="service-telemetry/alertmanager-default-0" Jan 21 18:41:05 crc kubenswrapper[5099]: I0121 18:41:05.269335 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe-config-out\") pod \"alertmanager-default-0\" (UID: \"463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe\") " pod="service-telemetry/alertmanager-default-0" Jan 21 18:41:05 crc 
Jan 21 18:41:05 crc kubenswrapper[5099]: I0121 18:41:05.269425 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe-config-volume\") pod \"alertmanager-default-0\" (UID: \"463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe\") " pod="service-telemetry/alertmanager-default-0"
Jan 21 18:41:05 crc kubenswrapper[5099]: I0121 18:41:05.269462 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe-web-config\") pod \"alertmanager-default-0\" (UID: \"463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe\") " pod="service-telemetry/alertmanager-default-0"
Jan 21 18:41:05 crc kubenswrapper[5099]: I0121 18:41:05.269550 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe\") " pod="service-telemetry/alertmanager-default-0"
Jan 21 18:41:05 crc kubenswrapper[5099]: E0121 18:41:05.269836 5099 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found
Jan 21 18:41:05 crc kubenswrapper[5099]: E0121 18:41:05.269952 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe-secret-default-alertmanager-proxy-tls podName:463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe nodeName:}" failed. No retries permitted until 2026-01-21 18:41:05.769921937 +0000 UTC m=+1623.183884398 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe") : secret "default-alertmanager-proxy-tls" not found
Jan 21 18:41:05 crc kubenswrapper[5099]: I0121 18:41:05.273015 5099 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 21 18:41:05 crc kubenswrapper[5099]: I0121 18:41:05.273072 5099 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-24a5c41a-ecf3-4582-9ec8-4dda71a02986\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-24a5c41a-ecf3-4582-9ec8-4dda71a02986\") pod \"alertmanager-default-0\" (UID: \"463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a2d72269873ad6d834e42afe644680d421aa20e86c3d54ba29dbbfdfd4612ba9/globalmount\"" pod="service-telemetry/alertmanager-default-0" Jan 21 18:41:05 crc kubenswrapper[5099]: I0121 18:41:05.281962 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe-tls-assets\") pod \"alertmanager-default-0\" (UID: \"463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe\") " pod="service-telemetry/alertmanager-default-0" Jan 21 18:41:05 crc kubenswrapper[5099]: I0121 18:41:05.282392 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe\") " pod="service-telemetry/alertmanager-default-0" Jan 21 18:41:05 crc kubenswrapper[5099]: I0121 18:41:05.283495 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe\") " pod="service-telemetry/alertmanager-default-0" Jan 21 18:41:05 crc kubenswrapper[5099]: I0121 18:41:05.285319 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe-config-out\") pod \"alertmanager-default-0\" (UID: \"463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe\") " pod="service-telemetry/alertmanager-default-0" Jan 21 18:41:05 crc kubenswrapper[5099]: I0121 18:41:05.285500 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe-config-volume\") pod \"alertmanager-default-0\" (UID: \"463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe\") " pod="service-telemetry/alertmanager-default-0" Jan 21 18:41:05 crc kubenswrapper[5099]: I0121 18:41:05.286192 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe-web-config\") pod \"alertmanager-default-0\" (UID: \"463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe\") " pod="service-telemetry/alertmanager-default-0" Jan 21 18:41:05 crc kubenswrapper[5099]: I0121 18:41:05.294568 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzk9q\" (UniqueName: \"kubernetes.io/projected/463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe-kube-api-access-xzk9q\") pod \"alertmanager-default-0\" (UID: \"463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe\") " pod="service-telemetry/alertmanager-default-0" Jan 21 18:41:05 crc kubenswrapper[5099]: I0121 18:41:05.311979 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-24a5c41a-ecf3-4582-9ec8-4dda71a02986\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-24a5c41a-ecf3-4582-9ec8-4dda71a02986\") pod 
\"alertmanager-default-0\" (UID: \"463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe\") " pod="service-telemetry/alertmanager-default-0" Jan 21 18:41:05 crc kubenswrapper[5099]: I0121 18:41:05.779818 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe\") " pod="service-telemetry/alertmanager-default-0" Jan 21 18:41:05 crc kubenswrapper[5099]: E0121 18:41:05.780096 5099 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Jan 21 18:41:05 crc kubenswrapper[5099]: E0121 18:41:05.780186 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe-secret-default-alertmanager-proxy-tls podName:463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe nodeName:}" failed. No retries permitted until 2026-01-21 18:41:06.780161826 +0000 UTC m=+1624.194124287 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe") : secret "default-alertmanager-proxy-tls" not found Jan 21 18:41:06 crc kubenswrapper[5099]: I0121 18:41:06.797623 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe\") " pod="service-telemetry/alertmanager-default-0" Jan 21 18:41:06 crc kubenswrapper[5099]: E0121 18:41:06.797884 5099 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Jan 21 18:41:06 crc kubenswrapper[5099]: E0121 18:41:06.798166 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe-secret-default-alertmanager-proxy-tls podName:463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe nodeName:}" failed. No retries permitted until 2026-01-21 18:41:08.798134293 +0000 UTC m=+1626.212096754 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe") : secret "default-alertmanager-proxy-tls" not found Jan 21 18:41:08 crc kubenswrapper[5099]: I0121 18:41:08.852281 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe\") " pod="service-telemetry/alertmanager-default-0" Jan 21 18:41:08 crc kubenswrapper[5099]: I0121 18:41:08.859450 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe\") " pod="service-telemetry/alertmanager-default-0" Jan 21 18:41:08 crc kubenswrapper[5099]: I0121 18:41:08.990903 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/alertmanager-default-0" Jan 21 18:41:09 crc kubenswrapper[5099]: I0121 18:41:09.527153 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Jan 21 18:41:09 crc kubenswrapper[5099]: W0121 18:41:09.575260 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod463f5beb_d2e5_4316_9cce_a1f8ab3ca4fe.slice/crio-53a8897affa74e8df036a86d8dd4ba3cc020bb351b1b23f8d96ec1c967c22f89 WatchSource:0}: Error finding container 53a8897affa74e8df036a86d8dd4ba3cc020bb351b1b23f8d96ec1c967c22f89: Status 404 returned error can't find the container with id 53a8897affa74e8df036a86d8dd4ba3cc020bb351b1b23f8d96ec1c967c22f89 Jan 21 18:41:09 crc kubenswrapper[5099]: I0121 18:41:09.642971 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe","Type":"ContainerStarted","Data":"53a8897affa74e8df036a86d8dd4ba3cc020bb351b1b23f8d96ec1c967c22f89"} Jan 21 18:41:10 crc kubenswrapper[5099]: I0121 18:41:10.652312 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-694dc457d5-rk86p" event={"ID":"d63f0418-bf6c-4a0a-8b72-8fa1215358c0","Type":"ContainerStarted","Data":"cf6a7b76a3f7d95da8802c90c7da8c1d88ec08aa2b124622dab48d14b7c96d6b"} Jan 21 18:41:10 crc kubenswrapper[5099]: I0121 18:41:10.673799 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-snmp-webhook-694dc457d5-rk86p" podStartSLOduration=1.7700916530000002 podStartE2EDuration="9.673773923s" podCreationTimestamp="2026-01-21 18:41:01 +0000 UTC" firstStartedPulling="2026-01-21 18:41:01.668483994 +0000 UTC m=+1619.082446455" lastFinishedPulling="2026-01-21 18:41:09.572166254 +0000 UTC m=+1626.986128725" observedRunningTime="2026-01-21 18:41:10.670526804 +0000 UTC m=+1628.084489265" watchObservedRunningTime="2026-01-21 18:41:10.673773923 +0000 UTC m=+1628.087736384" Jan 21 18:41:12 crc kubenswrapper[5099]: I0121 18:41:12.679928 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" 
event={"ID":"463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe","Type":"ContainerStarted","Data":"cc245b0dc2cbe10d9ab4e627c702672fd025d3e2dc313d7e50043577359e9a8a"} Jan 21 18:41:13 crc kubenswrapper[5099]: I0121 18:41:13.705535 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"60afeeee-13e5-4557-8409-391a5ae528c8","Type":"ContainerStarted","Data":"fec1bd809aeb32b8fc97cbd4f478f7a3a19663f3ce5023b136b7c841a7938e30"} Jan 21 18:41:17 crc kubenswrapper[5099]: I0121 18:41:17.743703 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"60afeeee-13e5-4557-8409-391a5ae528c8","Type":"ContainerStarted","Data":"acfb0931b17081fd3b8fd58212dd8ad161efb9e314666f2a6c1ff8873a3431fc"} Jan 21 18:41:19 crc kubenswrapper[5099]: I0121 18:41:19.760715 5099 generic.go:358] "Generic (PLEG): container finished" podID="463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe" containerID="cc245b0dc2cbe10d9ab4e627c702672fd025d3e2dc313d7e50043577359e9a8a" exitCode=0 Jan 21 18:41:19 crc kubenswrapper[5099]: I0121 18:41:19.760799 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe","Type":"ContainerDied","Data":"cc245b0dc2cbe10d9ab4e627c702672fd025d3e2dc313d7e50043577359e9a8a"} Jan 21 18:41:20 crc kubenswrapper[5099]: I0121 18:41:20.244261 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk"] Jan 21 18:41:20 crc kubenswrapper[5099]: I0121 18:41:20.268412 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk"] Jan 21 18:41:20 crc kubenswrapper[5099]: I0121 18:41:20.268543 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk" Jan 21 18:41:20 crc kubenswrapper[5099]: I0121 18:41:20.272935 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-meter-sg-core-configmap\"" Jan 21 18:41:20 crc kubenswrapper[5099]: I0121 18:41:20.273284 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-meter-proxy-tls\"" Jan 21 18:41:20 crc kubenswrapper[5099]: I0121 18:41:20.273572 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-session-secret\"" Jan 21 18:41:20 crc kubenswrapper[5099]: I0121 18:41:20.273807 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-dockercfg-4wbfp\"" Jan 21 18:41:20 crc kubenswrapper[5099]: I0121 18:41:20.357534 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/0afa2545-4e28-415f-b67f-e1825e024da4-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk\" (UID: \"0afa2545-4e28-415f-b67f-e1825e024da4\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk" Jan 21 18:41:20 crc kubenswrapper[5099]: I0121 18:41:20.357637 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0afa2545-4e28-415f-b67f-e1825e024da4-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk\" (UID: \"0afa2545-4e28-415f-b67f-e1825e024da4\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk" Jan 21 18:41:20 crc kubenswrapper[5099]: I0121 18:41:20.357672 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/0afa2545-4e28-415f-b67f-e1825e024da4-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk\" (UID: \"0afa2545-4e28-415f-b67f-e1825e024da4\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk" Jan 21 18:41:20 crc kubenswrapper[5099]: I0121 18:41:20.357835 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/0afa2545-4e28-415f-b67f-e1825e024da4-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk\" (UID: \"0afa2545-4e28-415f-b67f-e1825e024da4\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk" Jan 21 18:41:20 crc kubenswrapper[5099]: I0121 18:41:20.357865 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmbrt\" (UniqueName: \"kubernetes.io/projected/0afa2545-4e28-415f-b67f-e1825e024da4-kube-api-access-hmbrt\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk\" (UID: \"0afa2545-4e28-415f-b67f-e1825e024da4\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk" Jan 21 18:41:20 crc kubenswrapper[5099]: I0121 18:41:20.459179 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/0afa2545-4e28-415f-b67f-e1825e024da4-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk\" (UID: \"0afa2545-4e28-415f-b67f-e1825e024da4\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk" Jan 21 18:41:20 crc kubenswrapper[5099]: I0121 18:41:20.459287 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0afa2545-4e28-415f-b67f-e1825e024da4-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk\" (UID: \"0afa2545-4e28-415f-b67f-e1825e024da4\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk" Jan 21 18:41:20 crc kubenswrapper[5099]: I0121 18:41:20.459319 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/0afa2545-4e28-415f-b67f-e1825e024da4-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk\" (UID: \"0afa2545-4e28-415f-b67f-e1825e024da4\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk" Jan 21 18:41:20 crc kubenswrapper[5099]: I0121 18:41:20.459336 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/0afa2545-4e28-415f-b67f-e1825e024da4-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk\" (UID: \"0afa2545-4e28-415f-b67f-e1825e024da4\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk" Jan 21 18:41:20 crc kubenswrapper[5099]: I0121 18:41:20.459356 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hmbrt\" (UniqueName: \"kubernetes.io/projected/0afa2545-4e28-415f-b67f-e1825e024da4-kube-api-access-hmbrt\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk\" (UID: \"0afa2545-4e28-415f-b67f-e1825e024da4\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk" Jan 21 18:41:20 crc kubenswrapper[5099]: E0121 18:41:20.459712 5099 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Jan 21 18:41:20 crc kubenswrapper[5099]: E0121 18:41:20.459897 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0afa2545-4e28-415f-b67f-e1825e024da4-default-cloud1-coll-meter-proxy-tls podName:0afa2545-4e28-415f-b67f-e1825e024da4 nodeName:}" failed. No retries permitted until 2026-01-21 18:41:20.959867509 +0000 UTC m=+1638.373829970 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/0afa2545-4e28-415f-b67f-e1825e024da4-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk" (UID: "0afa2545-4e28-415f-b67f-e1825e024da4") : secret "default-cloud1-coll-meter-proxy-tls" not found Jan 21 18:41:20 crc kubenswrapper[5099]: I0121 18:41:20.460606 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/0afa2545-4e28-415f-b67f-e1825e024da4-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk\" (UID: \"0afa2545-4e28-415f-b67f-e1825e024da4\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk" Jan 21 18:41:20 crc kubenswrapper[5099]: I0121 18:41:20.461001 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/0afa2545-4e28-415f-b67f-e1825e024da4-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk\" (UID: \"0afa2545-4e28-415f-b67f-e1825e024da4\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk" Jan 21 18:41:20 crc kubenswrapper[5099]: I0121 18:41:20.472395 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/0afa2545-4e28-415f-b67f-e1825e024da4-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk\" (UID: \"0afa2545-4e28-415f-b67f-e1825e024da4\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk" Jan 21 18:41:20 crc kubenswrapper[5099]: I0121 18:41:20.480109 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmbrt\" (UniqueName: \"kubernetes.io/projected/0afa2545-4e28-415f-b67f-e1825e024da4-kube-api-access-hmbrt\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk\" (UID: \"0afa2545-4e28-415f-b67f-e1825e024da4\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk" Jan 21 18:41:20 crc kubenswrapper[5099]: I0121 18:41:20.968490 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0afa2545-4e28-415f-b67f-e1825e024da4-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk\" (UID: \"0afa2545-4e28-415f-b67f-e1825e024da4\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk" Jan 21 18:41:20 crc kubenswrapper[5099]: E0121 18:41:20.968614 5099 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Jan 21 18:41:20 crc kubenswrapper[5099]: E0121 18:41:20.969244 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0afa2545-4e28-415f-b67f-e1825e024da4-default-cloud1-coll-meter-proxy-tls podName:0afa2545-4e28-415f-b67f-e1825e024da4 nodeName:}" failed. No retries permitted until 2026-01-21 18:41:21.969217786 +0000 UTC m=+1639.383180247 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/0afa2545-4e28-415f-b67f-e1825e024da4-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk" (UID: "0afa2545-4e28-415f-b67f-e1825e024da4") : secret "default-cloud1-coll-meter-proxy-tls" not found Jan 21 18:41:21 crc kubenswrapper[5099]: I0121 18:41:21.984518 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0afa2545-4e28-415f-b67f-e1825e024da4-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk\" (UID: \"0afa2545-4e28-415f-b67f-e1825e024da4\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk" Jan 21 18:41:22 crc kubenswrapper[5099]: I0121 18:41:22.002795 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0afa2545-4e28-415f-b67f-e1825e024da4-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk\" (UID: \"0afa2545-4e28-415f-b67f-e1825e024da4\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk" Jan 21 18:41:22 crc kubenswrapper[5099]: I0121 18:41:22.064481 5099 patch_prober.go:28] interesting pod/machine-config-daemon-hsl47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 18:41:22 crc kubenswrapper[5099]: I0121 18:41:22.064581 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 18:41:22 crc kubenswrapper[5099]: I0121 18:41:22.064648 5099 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" Jan 21 18:41:22 crc kubenswrapper[5099]: I0121 18:41:22.065714 5099 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2597b26be062a6fac05730bc303e55baf9c4e17d7a57f2962c96e40cd24ca0da"} pod="openshift-machine-config-operator/machine-config-daemon-hsl47" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 18:41:22 crc kubenswrapper[5099]: I0121 18:41:22.065822 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" containerID="cri-o://2597b26be062a6fac05730bc303e55baf9c4e17d7a57f2962c96e40cd24ca0da" gracePeriod=600 Jan 21 18:41:22 crc kubenswrapper[5099]: I0121 18:41:22.113076 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk" Jan 21 18:41:22 crc kubenswrapper[5099]: I0121 18:41:22.809483 5099 generic.go:358] "Generic (PLEG): container finished" podID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerID="2597b26be062a6fac05730bc303e55baf9c4e17d7a57f2962c96e40cd24ca0da" exitCode=0 Jan 21 18:41:22 crc kubenswrapper[5099]: I0121 18:41:22.809539 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" event={"ID":"b19b831f-eaf0-4c77-859b-84eb9a5f233c","Type":"ContainerDied","Data":"2597b26be062a6fac05730bc303e55baf9c4e17d7a57f2962c96e40cd24ca0da"} Jan 21 18:41:22 crc kubenswrapper[5099]: I0121 18:41:22.809622 5099 scope.go:117] "RemoveContainer" containerID="c34863c08d0134cd7b5207ebf16a5d100ecccdeb0556f0934b642e587f43c4fa" Jan 21 18:41:23 crc kubenswrapper[5099]: E0121 18:41:23.141455 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 18:41:23 crc kubenswrapper[5099]: I0121 18:41:23.168301 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq"] Jan 21 18:41:23 crc kubenswrapper[5099]: I0121 18:41:23.191553 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq"] Jan 21 18:41:23 crc kubenswrapper[5099]: I0121 18:41:23.191585 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq" Jan 21 18:41:23 crc kubenswrapper[5099]: I0121 18:41:23.195434 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-meter-proxy-tls\"" Jan 21 18:41:23 crc kubenswrapper[5099]: I0121 18:41:23.195861 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-meter-sg-core-configmap\"" Jan 21 18:41:23 crc kubenswrapper[5099]: I0121 18:41:23.310122 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/a82cf411-eed8-4850-9fbf-a0c128c16d13-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq\" (UID: \"a82cf411-eed8-4850-9fbf-a0c128c16d13\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq" Jan 21 18:41:23 crc kubenswrapper[5099]: I0121 18:41:23.310188 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/a82cf411-eed8-4850-9fbf-a0c128c16d13-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq\" (UID: \"a82cf411-eed8-4850-9fbf-a0c128c16d13\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq" Jan 21 18:41:23 crc kubenswrapper[5099]: I0121 18:41:23.310218 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/a82cf411-eed8-4850-9fbf-a0c128c16d13-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq\" (UID: \"a82cf411-eed8-4850-9fbf-a0c128c16d13\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq" Jan 21 18:41:23 crc kubenswrapper[5099]: I0121 18:41:23.310551 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/a82cf411-eed8-4850-9fbf-a0c128c16d13-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq\" (UID: \"a82cf411-eed8-4850-9fbf-a0c128c16d13\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq" Jan 21 18:41:23 crc kubenswrapper[5099]: I0121 18:41:23.310689 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74bhn\" (UniqueName: \"kubernetes.io/projected/a82cf411-eed8-4850-9fbf-a0c128c16d13-kube-api-access-74bhn\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq\" (UID: \"a82cf411-eed8-4850-9fbf-a0c128c16d13\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq" Jan 21 18:41:23 crc kubenswrapper[5099]: I0121 18:41:23.411834 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/a82cf411-eed8-4850-9fbf-a0c128c16d13-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq\" (UID: \"a82cf411-eed8-4850-9fbf-a0c128c16d13\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq" Jan 21 18:41:23 crc kubenswrapper[5099]: I0121 18:41:23.412366 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-74bhn\" (UniqueName: 
\"kubernetes.io/projected/a82cf411-eed8-4850-9fbf-a0c128c16d13-kube-api-access-74bhn\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq\" (UID: \"a82cf411-eed8-4850-9fbf-a0c128c16d13\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq" Jan 21 18:41:23 crc kubenswrapper[5099]: I0121 18:41:23.412486 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/a82cf411-eed8-4850-9fbf-a0c128c16d13-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq\" (UID: \"a82cf411-eed8-4850-9fbf-a0c128c16d13\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq" Jan 21 18:41:23 crc kubenswrapper[5099]: I0121 18:41:23.412516 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/a82cf411-eed8-4850-9fbf-a0c128c16d13-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq\" (UID: \"a82cf411-eed8-4850-9fbf-a0c128c16d13\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq" Jan 21 18:41:23 crc kubenswrapper[5099]: I0121 18:41:23.412544 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/a82cf411-eed8-4850-9fbf-a0c128c16d13-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq\" (UID: \"a82cf411-eed8-4850-9fbf-a0c128c16d13\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq" Jan 21 18:41:23 crc kubenswrapper[5099]: E0121 18:41:23.412749 5099 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Jan 21 18:41:23 crc kubenswrapper[5099]: E0121 18:41:23.412874 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a82cf411-eed8-4850-9fbf-a0c128c16d13-default-cloud1-ceil-meter-proxy-tls podName:a82cf411-eed8-4850-9fbf-a0c128c16d13 nodeName:}" failed. No retries permitted until 2026-01-21 18:41:23.912843031 +0000 UTC m=+1641.326805492 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/a82cf411-eed8-4850-9fbf-a0c128c16d13-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq" (UID: "a82cf411-eed8-4850-9fbf-a0c128c16d13") : secret "default-cloud1-ceil-meter-proxy-tls" not found Jan 21 18:41:23 crc kubenswrapper[5099]: I0121 18:41:23.413463 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/a82cf411-eed8-4850-9fbf-a0c128c16d13-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq\" (UID: \"a82cf411-eed8-4850-9fbf-a0c128c16d13\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq" Jan 21 18:41:23 crc kubenswrapper[5099]: I0121 18:41:23.413898 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/a82cf411-eed8-4850-9fbf-a0c128c16d13-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq\" (UID: \"a82cf411-eed8-4850-9fbf-a0c128c16d13\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq" Jan 21 18:41:23 crc kubenswrapper[5099]: I0121 18:41:23.424903 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/a82cf411-eed8-4850-9fbf-a0c128c16d13-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq\" (UID: \"a82cf411-eed8-4850-9fbf-a0c128c16d13\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq" Jan 21 18:41:23 crc kubenswrapper[5099]: I0121 18:41:23.432285 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-74bhn\" (UniqueName: \"kubernetes.io/projected/a82cf411-eed8-4850-9fbf-a0c128c16d13-kube-api-access-74bhn\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq\" (UID: \"a82cf411-eed8-4850-9fbf-a0c128c16d13\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq" Jan 21 18:41:23 crc kubenswrapper[5099]: I0121 18:41:23.819012 5099 scope.go:117] "RemoveContainer" containerID="2597b26be062a6fac05730bc303e55baf9c4e17d7a57f2962c96e40cd24ca0da" Jan 21 18:41:23 crc kubenswrapper[5099]: E0121 18:41:23.819473 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 18:41:23 crc kubenswrapper[5099]: I0121 18:41:23.920996 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/a82cf411-eed8-4850-9fbf-a0c128c16d13-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq\" (UID: \"a82cf411-eed8-4850-9fbf-a0c128c16d13\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq" Jan 21 18:41:23 crc kubenswrapper[5099]: E0121 18:41:23.921178 5099 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Jan 21 18:41:23 crc kubenswrapper[5099]: E0121 
18:41:23.921261 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a82cf411-eed8-4850-9fbf-a0c128c16d13-default-cloud1-ceil-meter-proxy-tls podName:a82cf411-eed8-4850-9fbf-a0c128c16d13 nodeName:}" failed. No retries permitted until 2026-01-21 18:41:24.921238634 +0000 UTC m=+1642.335201095 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/a82cf411-eed8-4850-9fbf-a0c128c16d13-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq" (UID: "a82cf411-eed8-4850-9fbf-a0c128c16d13") : secret "default-cloud1-ceil-meter-proxy-tls" not found Jan 21 18:41:24 crc kubenswrapper[5099]: I0121 18:41:24.499484 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk"] Jan 21 18:41:24 crc kubenswrapper[5099]: I0121 18:41:24.943619 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/a82cf411-eed8-4850-9fbf-a0c128c16d13-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq\" (UID: \"a82cf411-eed8-4850-9fbf-a0c128c16d13\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq" Jan 21 18:41:24 crc kubenswrapper[5099]: I0121 18:41:24.966185 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/a82cf411-eed8-4850-9fbf-a0c128c16d13-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq\" (UID: \"a82cf411-eed8-4850-9fbf-a0c128c16d13\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq" Jan 21 18:41:25 crc kubenswrapper[5099]: I0121 18:41:25.020488 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq" Jan 21 18:41:25 crc kubenswrapper[5099]: W0121 18:41:25.551931 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0afa2545_4e28_415f_b67f_e1825e024da4.slice/crio-6cf0182864b0c22d7f13d10f2a75efb5d1d75f860e2c2b0c07955cb21f362cc2 WatchSource:0}: Error finding container 6cf0182864b0c22d7f13d10f2a75efb5d1d75f860e2c2b0c07955cb21f362cc2: Status 404 returned error can't find the container with id 6cf0182864b0c22d7f13d10f2a75efb5d1d75f860e2c2b0c07955cb21f362cc2 Jan 21 18:41:25 crc kubenswrapper[5099]: I0121 18:41:25.838462 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk" event={"ID":"0afa2545-4e28-415f-b67f-e1825e024da4","Type":"ContainerStarted","Data":"6cf0182864b0c22d7f13d10f2a75efb5d1d75f860e2c2b0c07955cb21f362cc2"} Jan 21 18:41:26 crc kubenswrapper[5099]: I0121 18:41:26.227315 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq"] Jan 21 18:41:26 crc kubenswrapper[5099]: I0121 18:41:26.857053 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk" event={"ID":"0afa2545-4e28-415f-b67f-e1825e024da4","Type":"ContainerStarted","Data":"01e2c3c81d5e43513f92e870eed6a1d057a603d3df54a932a61e8e59f8067cf6"} Jan 21 18:41:26 crc kubenswrapper[5099]: I0121 18:41:26.862390 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe","Type":"ContainerStarted","Data":"4367d323e7dc1c78385684cea365b5e3f9cbc72dfbd4afbfbcb7d20ee93bab2c"} Jan 21 18:41:26 crc kubenswrapper[5099]: I0121 18:41:26.864054 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq" event={"ID":"a82cf411-eed8-4850-9fbf-a0c128c16d13","Type":"ContainerStarted","Data":"30ff6386a044c69071433580faf6bf0162b14b906ae759c5d34be3b20be8de07"} Jan 21 18:41:26 crc kubenswrapper[5099]: I0121 18:41:26.867084 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"60afeeee-13e5-4557-8409-391a5ae528c8","Type":"ContainerStarted","Data":"2354d4a7817bd18a18a1496854e8baa51adccc6b3948c7aa8059f7b030cc404b"} Jan 21 18:41:26 crc kubenswrapper[5099]: I0121 18:41:26.900422 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-default-0" podStartSLOduration=3.7220409290000003 podStartE2EDuration="37.90039944s" podCreationTimestamp="2026-01-21 18:40:49 +0000 UTC" firstStartedPulling="2026-01-21 18:40:51.567580274 +0000 UTC m=+1608.981542735" lastFinishedPulling="2026-01-21 18:41:25.745938785 +0000 UTC m=+1643.159901246" observedRunningTime="2026-01-21 18:41:26.896302159 +0000 UTC m=+1644.310264610" watchObservedRunningTime="2026-01-21 18:41:26.90039944 +0000 UTC m=+1644.314361901" Jan 21 18:41:27 crc kubenswrapper[5099]: I0121 18:41:27.366773 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv"] Jan 21 18:41:27 crc kubenswrapper[5099]: I0121 18:41:27.379749 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv" Jan 21 18:41:27 crc kubenswrapper[5099]: I0121 18:41:27.384883 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv"] Jan 21 18:41:27 crc kubenswrapper[5099]: I0121 18:41:27.390140 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-sens-meter-proxy-tls\"" Jan 21 18:41:27 crc kubenswrapper[5099]: I0121 18:41:27.390334 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-sens-meter-sg-core-configmap\"" Jan 21 18:41:27 crc kubenswrapper[5099]: I0121 18:41:27.510264 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/201688ad-f074-4dc2-9033-36f09f9e4a9d-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv\" (UID: \"201688ad-f074-4dc2-9033-36f09f9e4a9d\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv" Jan 21 18:41:27 crc kubenswrapper[5099]: I0121 18:41:27.510322 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/201688ad-f074-4dc2-9033-36f09f9e4a9d-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv\" (UID: \"201688ad-f074-4dc2-9033-36f09f9e4a9d\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv" Jan 21 18:41:27 crc kubenswrapper[5099]: I0121 18:41:27.510361 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/201688ad-f074-4dc2-9033-36f09f9e4a9d-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv\" (UID: \"201688ad-f074-4dc2-9033-36f09f9e4a9d\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv" Jan 21 18:41:27 crc kubenswrapper[5099]: I0121 18:41:27.510435 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxdn4\" (UniqueName: \"kubernetes.io/projected/201688ad-f074-4dc2-9033-36f09f9e4a9d-kube-api-access-qxdn4\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv\" (UID: \"201688ad-f074-4dc2-9033-36f09f9e4a9d\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv" Jan 21 18:41:27 crc kubenswrapper[5099]: I0121 18:41:27.510747 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/201688ad-f074-4dc2-9033-36f09f9e4a9d-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv\" (UID: \"201688ad-f074-4dc2-9033-36f09f9e4a9d\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv" Jan 21 18:41:27 crc kubenswrapper[5099]: I0121 18:41:27.612212 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qxdn4\" (UniqueName: \"kubernetes.io/projected/201688ad-f074-4dc2-9033-36f09f9e4a9d-kube-api-access-qxdn4\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv\" (UID: \"201688ad-f074-4dc2-9033-36f09f9e4a9d\") " 
pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv" Jan 21 18:41:27 crc kubenswrapper[5099]: I0121 18:41:27.612261 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/201688ad-f074-4dc2-9033-36f09f9e4a9d-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv\" (UID: \"201688ad-f074-4dc2-9033-36f09f9e4a9d\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv" Jan 21 18:41:27 crc kubenswrapper[5099]: I0121 18:41:27.612310 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/201688ad-f074-4dc2-9033-36f09f9e4a9d-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv\" (UID: \"201688ad-f074-4dc2-9033-36f09f9e4a9d\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv" Jan 21 18:41:27 crc kubenswrapper[5099]: I0121 18:41:27.612843 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/201688ad-f074-4dc2-9033-36f09f9e4a9d-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv\" (UID: \"201688ad-f074-4dc2-9033-36f09f9e4a9d\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv" Jan 21 18:41:27 crc kubenswrapper[5099]: I0121 18:41:27.612921 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/201688ad-f074-4dc2-9033-36f09f9e4a9d-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv\" (UID: \"201688ad-f074-4dc2-9033-36f09f9e4a9d\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv" Jan 21 18:41:27 crc kubenswrapper[5099]: E0121 18:41:27.612957 5099 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Jan 21 18:41:27 crc kubenswrapper[5099]: E0121 18:41:27.613064 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/201688ad-f074-4dc2-9033-36f09f9e4a9d-default-cloud1-sens-meter-proxy-tls podName:201688ad-f074-4dc2-9033-36f09f9e4a9d nodeName:}" failed. No retries permitted until 2026-01-21 18:41:28.113039375 +0000 UTC m=+1645.527002006 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/201688ad-f074-4dc2-9033-36f09f9e4a9d-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv" (UID: "201688ad-f074-4dc2-9033-36f09f9e4a9d") : secret "default-cloud1-sens-meter-proxy-tls" not found Jan 21 18:41:27 crc kubenswrapper[5099]: I0121 18:41:27.613422 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/201688ad-f074-4dc2-9033-36f09f9e4a9d-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv\" (UID: \"201688ad-f074-4dc2-9033-36f09f9e4a9d\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv" Jan 21 18:41:27 crc kubenswrapper[5099]: I0121 18:41:27.614328 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/201688ad-f074-4dc2-9033-36f09f9e4a9d-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv\" (UID: \"201688ad-f074-4dc2-9033-36f09f9e4a9d\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv" Jan 21 18:41:27 crc kubenswrapper[5099]: I0121 18:41:27.625372 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/201688ad-f074-4dc2-9033-36f09f9e4a9d-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv\" (UID: \"201688ad-f074-4dc2-9033-36f09f9e4a9d\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv" Jan 21 18:41:27 crc kubenswrapper[5099]: I0121 18:41:27.649260 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxdn4\" (UniqueName: \"kubernetes.io/projected/201688ad-f074-4dc2-9033-36f09f9e4a9d-kube-api-access-qxdn4\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv\" (UID: \"201688ad-f074-4dc2-9033-36f09f9e4a9d\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv" Jan 21 18:41:27 crc kubenswrapper[5099]: I0121 18:41:27.880598 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk" event={"ID":"0afa2545-4e28-415f-b67f-e1825e024da4","Type":"ContainerStarted","Data":"c5af00ec975fde836ac7a6c018585e26fac0f091d9bfd4d24f21125f8556a31c"} Jan 21 18:41:27 crc kubenswrapper[5099]: I0121 18:41:27.884451 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq" event={"ID":"a82cf411-eed8-4850-9fbf-a0c128c16d13","Type":"ContainerStarted","Data":"b7a64f625cf8efa395cc3be58b80b1c3102959a31065a0f55b7680bfab90571e"} Jan 21 18:41:28 crc kubenswrapper[5099]: I0121 18:41:28.123278 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/201688ad-f074-4dc2-9033-36f09f9e4a9d-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv\" (UID: \"201688ad-f074-4dc2-9033-36f09f9e4a9d\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv" Jan 21 18:41:28 crc kubenswrapper[5099]: E0121 18:41:28.123432 5099 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Jan 21 18:41:28 crc 
kubenswrapper[5099]: E0121 18:41:28.123623 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/201688ad-f074-4dc2-9033-36f09f9e4a9d-default-cloud1-sens-meter-proxy-tls podName:201688ad-f074-4dc2-9033-36f09f9e4a9d nodeName:}" failed. No retries permitted until 2026-01-21 18:41:29.123600912 +0000 UTC m=+1646.537563373 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/201688ad-f074-4dc2-9033-36f09f9e4a9d-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv" (UID: "201688ad-f074-4dc2-9033-36f09f9e4a9d") : secret "default-cloud1-sens-meter-proxy-tls" not found Jan 21 18:41:28 crc kubenswrapper[5099]: I0121 18:41:28.911812 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe","Type":"ContainerStarted","Data":"1313f1fb4ad5521e7d6ccba050003858666382d2856dbf13ccfcca288f243b3d"} Jan 21 18:41:28 crc kubenswrapper[5099]: I0121 18:41:28.914256 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq" event={"ID":"a82cf411-eed8-4850-9fbf-a0c128c16d13","Type":"ContainerStarted","Data":"eed5b9d69d8b505a509318a4374d1164ec4b71265cc4df15447c90487bd2c74d"} Jan 21 18:41:29 crc kubenswrapper[5099]: I0121 18:41:29.141149 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/201688ad-f074-4dc2-9033-36f09f9e4a9d-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv\" (UID: \"201688ad-f074-4dc2-9033-36f09f9e4a9d\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv" Jan 21 18:41:29 crc kubenswrapper[5099]: E0121 18:41:29.141384 5099 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Jan 21 18:41:29 crc kubenswrapper[5099]: E0121 18:41:29.141514 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/201688ad-f074-4dc2-9033-36f09f9e4a9d-default-cloud1-sens-meter-proxy-tls podName:201688ad-f074-4dc2-9033-36f09f9e4a9d nodeName:}" failed. No retries permitted until 2026-01-21 18:41:31.141482116 +0000 UTC m=+1648.555444577 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/201688ad-f074-4dc2-9033-36f09f9e4a9d-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv" (UID: "201688ad-f074-4dc2-9033-36f09f9e4a9d") : secret "default-cloud1-sens-meter-proxy-tls" not found Jan 21 18:41:31 crc kubenswrapper[5099]: I0121 18:41:31.181911 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/201688ad-f074-4dc2-9033-36f09f9e4a9d-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv\" (UID: \"201688ad-f074-4dc2-9033-36f09f9e4a9d\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv" Jan 21 18:41:31 crc kubenswrapper[5099]: I0121 18:41:31.202168 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/201688ad-f074-4dc2-9033-36f09f9e4a9d-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv\" (UID: \"201688ad-f074-4dc2-9033-36f09f9e4a9d\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv" Jan 21 18:41:31 crc kubenswrapper[5099]: I0121 18:41:31.306016 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv" Jan 21 18:41:31 crc kubenswrapper[5099]: I0121 18:41:31.318271 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/prometheus-default-0" Jan 21 18:41:34 crc kubenswrapper[5099]: I0121 18:41:34.556590 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv"] Jan 21 18:41:34 crc kubenswrapper[5099]: I0121 18:41:34.992456 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk" event={"ID":"0afa2545-4e28-415f-b67f-e1825e024da4","Type":"ContainerStarted","Data":"2c68d37424109d2b38f752abadcd4b6b4ea3464df89b81f54cc6a9a80943f09a"} Jan 21 18:41:34 crc kubenswrapper[5099]: I0121 18:41:34.998025 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe","Type":"ContainerStarted","Data":"c9249ca46426b144f7ed2085ca7f0ebac2676c7dd01709fc4ff597bb1fe3da2c"} Jan 21 18:41:35 crc kubenswrapper[5099]: I0121 18:41:35.000791 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq" event={"ID":"a82cf411-eed8-4850-9fbf-a0c128c16d13","Type":"ContainerStarted","Data":"2bab1310e4c0c50f1b785253a8e4bef1730ae1cb5e2d3b9dd3101da97eedca45"} Jan 21 18:41:35 crc kubenswrapper[5099]: I0121 18:41:35.003257 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv" event={"ID":"201688ad-f074-4dc2-9033-36f09f9e4a9d","Type":"ContainerStarted","Data":"29d781e9eb0bbd05d1eb154d87e0116cc21118a5aa7292076ad4a9c3701a15a8"} Jan 21 18:41:35 crc kubenswrapper[5099]: I0121 18:41:35.014783 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk" podStartSLOduration=6.433669179 
podStartE2EDuration="15.01476039s" podCreationTimestamp="2026-01-21 18:41:20 +0000 UTC" firstStartedPulling="2026-01-21 18:41:25.555165774 +0000 UTC m=+1642.969128235" lastFinishedPulling="2026-01-21 18:41:34.136256985 +0000 UTC m=+1651.550219446" observedRunningTime="2026-01-21 18:41:35.012477474 +0000 UTC m=+1652.426439935" watchObservedRunningTime="2026-01-21 18:41:35.01476039 +0000 UTC m=+1652.428722851" Jan 21 18:41:35 crc kubenswrapper[5099]: I0121 18:41:35.053244 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/alertmanager-default-0" podStartSLOduration=22.954995544 podStartE2EDuration="32.053223083s" podCreationTimestamp="2026-01-21 18:41:03 +0000 UTC" firstStartedPulling="2026-01-21 18:41:19.762517509 +0000 UTC m=+1637.176479970" lastFinishedPulling="2026-01-21 18:41:28.860745048 +0000 UTC m=+1646.274707509" observedRunningTime="2026-01-21 18:41:35.039069126 +0000 UTC m=+1652.453031577" watchObservedRunningTime="2026-01-21 18:41:35.053223083 +0000 UTC m=+1652.467185564" Jan 21 18:41:35 crc kubenswrapper[5099]: I0121 18:41:35.079883 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq" podStartSLOduration=4.216545427 podStartE2EDuration="12.079832926s" podCreationTimestamp="2026-01-21 18:41:23 +0000 UTC" firstStartedPulling="2026-01-21 18:41:26.248070015 +0000 UTC m=+1643.662032476" lastFinishedPulling="2026-01-21 18:41:34.111357514 +0000 UTC m=+1651.525319975" observedRunningTime="2026-01-21 18:41:35.075146631 +0000 UTC m=+1652.489109102" watchObservedRunningTime="2026-01-21 18:41:35.079832926 +0000 UTC m=+1652.493795407" Jan 21 18:41:36 crc kubenswrapper[5099]: I0121 18:41:36.017441 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv" event={"ID":"201688ad-f074-4dc2-9033-36f09f9e4a9d","Type":"ContainerStarted","Data":"7575cdff2a71725ccf7aa586c4b0c4a57cff09d45f174c06820c854f8261a7ef"} Jan 21 18:41:36 crc kubenswrapper[5099]: I0121 18:41:36.018108 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv" event={"ID":"201688ad-f074-4dc2-9033-36f09f9e4a9d","Type":"ContainerStarted","Data":"a013fff0608be8d166b652444ea748f453afbc290afdd7cdaa470f52e4e25bba"} Jan 21 18:41:36 crc kubenswrapper[5099]: I0121 18:41:36.018128 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv" event={"ID":"201688ad-f074-4dc2-9033-36f09f9e4a9d","Type":"ContainerStarted","Data":"f6fa8b051aab5009ea1985d2c8e1baf0a63a37308d489abf5ebe37773fe3bb09"} Jan 21 18:41:36 crc kubenswrapper[5099]: I0121 18:41:36.042575 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv" podStartSLOduration=8.020227034 podStartE2EDuration="9.042552257s" podCreationTimestamp="2026-01-21 18:41:27 +0000 UTC" firstStartedPulling="2026-01-21 18:41:34.563521288 +0000 UTC m=+1651.977483749" lastFinishedPulling="2026-01-21 18:41:35.585846511 +0000 UTC m=+1652.999808972" observedRunningTime="2026-01-21 18:41:36.039577314 +0000 UTC m=+1653.453539775" watchObservedRunningTime="2026-01-21 18:41:36.042552257 +0000 UTC m=+1653.456514718" Jan 21 18:41:36 crc kubenswrapper[5099]: I0121 18:41:36.314683 5099 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["service-telemetry/default-cloud1-coll-event-smartgateway-7dc4588586-kd9rf"] Jan 21 18:41:36 crc kubenswrapper[5099]: I0121 18:41:36.354811 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/prometheus-default-0" Jan 21 18:41:36 crc kubenswrapper[5099]: I0121 18:41:36.355780 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-7dc4588586-kd9rf" Jan 21 18:41:36 crc kubenswrapper[5099]: I0121 18:41:36.360698 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-event-sg-core-configmap\"" Jan 21 18:41:36 crc kubenswrapper[5099]: I0121 18:41:36.360996 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-cert\"" Jan 21 18:41:36 crc kubenswrapper[5099]: I0121 18:41:36.366200 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-7dc4588586-kd9rf"] Jan 21 18:41:36 crc kubenswrapper[5099]: I0121 18:41:36.448303 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/prometheus-default-0" Jan 21 18:41:36 crc kubenswrapper[5099]: I0121 18:41:36.479154 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/5b9fc1f3-5435-4492-a8fb-4ab02d6de3f6-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-7dc4588586-kd9rf\" (UID: \"5b9fc1f3-5435-4492-a8fb-4ab02d6de3f6\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7dc4588586-kd9rf" Jan 21 18:41:36 crc kubenswrapper[5099]: I0121 18:41:36.479331 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vjjz\" (UniqueName: \"kubernetes.io/projected/5b9fc1f3-5435-4492-a8fb-4ab02d6de3f6-kube-api-access-6vjjz\") pod \"default-cloud1-coll-event-smartgateway-7dc4588586-kd9rf\" (UID: \"5b9fc1f3-5435-4492-a8fb-4ab02d6de3f6\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7dc4588586-kd9rf" Jan 21 18:41:36 crc kubenswrapper[5099]: I0121 18:41:36.479434 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/5b9fc1f3-5435-4492-a8fb-4ab02d6de3f6-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-7dc4588586-kd9rf\" (UID: \"5b9fc1f3-5435-4492-a8fb-4ab02d6de3f6\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7dc4588586-kd9rf" Jan 21 18:41:36 crc kubenswrapper[5099]: I0121 18:41:36.479493 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/5b9fc1f3-5435-4492-a8fb-4ab02d6de3f6-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-7dc4588586-kd9rf\" (UID: \"5b9fc1f3-5435-4492-a8fb-4ab02d6de3f6\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7dc4588586-kd9rf" Jan 21 18:41:36 crc kubenswrapper[5099]: I0121 18:41:36.580995 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6vjjz\" (UniqueName: \"kubernetes.io/projected/5b9fc1f3-5435-4492-a8fb-4ab02d6de3f6-kube-api-access-6vjjz\") pod \"default-cloud1-coll-event-smartgateway-7dc4588586-kd9rf\" (UID: \"5b9fc1f3-5435-4492-a8fb-4ab02d6de3f6\") " 
pod="service-telemetry/default-cloud1-coll-event-smartgateway-7dc4588586-kd9rf" Jan 21 18:41:36 crc kubenswrapper[5099]: I0121 18:41:36.581135 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/5b9fc1f3-5435-4492-a8fb-4ab02d6de3f6-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-7dc4588586-kd9rf\" (UID: \"5b9fc1f3-5435-4492-a8fb-4ab02d6de3f6\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7dc4588586-kd9rf" Jan 21 18:41:36 crc kubenswrapper[5099]: I0121 18:41:36.581779 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/5b9fc1f3-5435-4492-a8fb-4ab02d6de3f6-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-7dc4588586-kd9rf\" (UID: \"5b9fc1f3-5435-4492-a8fb-4ab02d6de3f6\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7dc4588586-kd9rf" Jan 21 18:41:36 crc kubenswrapper[5099]: I0121 18:41:36.581864 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/5b9fc1f3-5435-4492-a8fb-4ab02d6de3f6-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-7dc4588586-kd9rf\" (UID: \"5b9fc1f3-5435-4492-a8fb-4ab02d6de3f6\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7dc4588586-kd9rf" Jan 21 18:41:36 crc kubenswrapper[5099]: I0121 18:41:36.581930 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/5b9fc1f3-5435-4492-a8fb-4ab02d6de3f6-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-7dc4588586-kd9rf\" (UID: \"5b9fc1f3-5435-4492-a8fb-4ab02d6de3f6\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7dc4588586-kd9rf" Jan 21 18:41:36 crc kubenswrapper[5099]: I0121 18:41:36.582772 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/5b9fc1f3-5435-4492-a8fb-4ab02d6de3f6-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-7dc4588586-kd9rf\" (UID: \"5b9fc1f3-5435-4492-a8fb-4ab02d6de3f6\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7dc4588586-kd9rf" Jan 21 18:41:36 crc kubenswrapper[5099]: I0121 18:41:36.589996 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/5b9fc1f3-5435-4492-a8fb-4ab02d6de3f6-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-7dc4588586-kd9rf\" (UID: \"5b9fc1f3-5435-4492-a8fb-4ab02d6de3f6\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7dc4588586-kd9rf" Jan 21 18:41:36 crc kubenswrapper[5099]: I0121 18:41:36.607078 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vjjz\" (UniqueName: \"kubernetes.io/projected/5b9fc1f3-5435-4492-a8fb-4ab02d6de3f6-kube-api-access-6vjjz\") pod \"default-cloud1-coll-event-smartgateway-7dc4588586-kd9rf\" (UID: \"5b9fc1f3-5435-4492-a8fb-4ab02d6de3f6\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7dc4588586-kd9rf" Jan 21 18:41:36 crc kubenswrapper[5099]: I0121 18:41:36.719623 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-7dc4588586-kd9rf" Jan 21 18:41:36 crc kubenswrapper[5099]: I0121 18:41:36.914405 5099 scope.go:117] "RemoveContainer" containerID="2597b26be062a6fac05730bc303e55baf9c4e17d7a57f2962c96e40cd24ca0da" Jan 21 18:41:36 crc kubenswrapper[5099]: E0121 18:41:36.915239 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 18:41:37 crc kubenswrapper[5099]: I0121 18:41:37.069482 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/prometheus-default-0" Jan 21 18:41:37 crc kubenswrapper[5099]: I0121 18:41:37.231520 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-7dc4588586-kd9rf"] Jan 21 18:41:38 crc kubenswrapper[5099]: I0121 18:41:38.045507 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-7dc4588586-kd9rf" event={"ID":"5b9fc1f3-5435-4492-a8fb-4ab02d6de3f6","Type":"ContainerStarted","Data":"ae8f3af011f4395dc9bdce4ecfb806711a54628ce121361ebc98f73410abafca"} Jan 21 18:41:38 crc kubenswrapper[5099]: I0121 18:41:38.045869 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-7dc4588586-kd9rf" event={"ID":"5b9fc1f3-5435-4492-a8fb-4ab02d6de3f6","Type":"ContainerStarted","Data":"5ce483aad0dda1b07978cf6d946c1900fa1a7f43737de833c578f7e7b8c62e86"} Jan 21 18:41:38 crc kubenswrapper[5099]: I0121 18:41:38.045885 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-7dc4588586-kd9rf" event={"ID":"5b9fc1f3-5435-4492-a8fb-4ab02d6de3f6","Type":"ContainerStarted","Data":"d491ae8d93b768b60b5b02cdd3592f2a3a05b7b27ac24d312ae54704c9a55aff"} Jan 21 18:41:38 crc kubenswrapper[5099]: I0121 18:41:38.077579 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-event-smartgateway-7dc4588586-kd9rf" podStartSLOduration=1.565660008 podStartE2EDuration="2.077558397s" podCreationTimestamp="2026-01-21 18:41:36 +0000 UTC" firstStartedPulling="2026-01-21 18:41:37.238215043 +0000 UTC m=+1654.652177504" lastFinishedPulling="2026-01-21 18:41:37.750113432 +0000 UTC m=+1655.164075893" observedRunningTime="2026-01-21 18:41:38.075926957 +0000 UTC m=+1655.489889428" watchObservedRunningTime="2026-01-21 18:41:38.077558397 +0000 UTC m=+1655.491520858" Jan 21 18:41:38 crc kubenswrapper[5099]: I0121 18:41:38.709395 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-646c885c84-r2p27"] Jan 21 18:41:39 crc kubenswrapper[5099]: I0121 18:41:39.482655 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-646c885c84-r2p27"] Jan 21 18:41:39 crc kubenswrapper[5099]: I0121 18:41:39.484452 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-646c885c84-r2p27" Jan 21 18:41:39 crc kubenswrapper[5099]: I0121 18:41:39.486836 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-event-sg-core-configmap\"" Jan 21 18:41:39 crc kubenswrapper[5099]: I0121 18:41:39.568655 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mcjr\" (UniqueName: \"kubernetes.io/projected/ed874079-58bf-48a1-8d42-af4769580a43-kube-api-access-4mcjr\") pod \"default-cloud1-ceil-event-smartgateway-646c885c84-r2p27\" (UID: \"ed874079-58bf-48a1-8d42-af4769580a43\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-646c885c84-r2p27" Jan 21 18:41:39 crc kubenswrapper[5099]: I0121 18:41:39.569000 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/ed874079-58bf-48a1-8d42-af4769580a43-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-646c885c84-r2p27\" (UID: \"ed874079-58bf-48a1-8d42-af4769580a43\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-646c885c84-r2p27" Jan 21 18:41:39 crc kubenswrapper[5099]: I0121 18:41:39.569138 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/ed874079-58bf-48a1-8d42-af4769580a43-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-646c885c84-r2p27\" (UID: \"ed874079-58bf-48a1-8d42-af4769580a43\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-646c885c84-r2p27" Jan 21 18:41:39 crc kubenswrapper[5099]: I0121 18:41:39.569454 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/ed874079-58bf-48a1-8d42-af4769580a43-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-646c885c84-r2p27\" (UID: \"ed874079-58bf-48a1-8d42-af4769580a43\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-646c885c84-r2p27" Jan 21 18:41:39 crc kubenswrapper[5099]: I0121 18:41:39.671184 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4mcjr\" (UniqueName: \"kubernetes.io/projected/ed874079-58bf-48a1-8d42-af4769580a43-kube-api-access-4mcjr\") pod \"default-cloud1-ceil-event-smartgateway-646c885c84-r2p27\" (UID: \"ed874079-58bf-48a1-8d42-af4769580a43\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-646c885c84-r2p27" Jan 21 18:41:39 crc kubenswrapper[5099]: I0121 18:41:39.671278 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/ed874079-58bf-48a1-8d42-af4769580a43-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-646c885c84-r2p27\" (UID: \"ed874079-58bf-48a1-8d42-af4769580a43\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-646c885c84-r2p27" Jan 21 18:41:39 crc kubenswrapper[5099]: I0121 18:41:39.671314 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/ed874079-58bf-48a1-8d42-af4769580a43-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-646c885c84-r2p27\" (UID: \"ed874079-58bf-48a1-8d42-af4769580a43\") " 
pod="service-telemetry/default-cloud1-ceil-event-smartgateway-646c885c84-r2p27" Jan 21 18:41:39 crc kubenswrapper[5099]: I0121 18:41:39.671370 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/ed874079-58bf-48a1-8d42-af4769580a43-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-646c885c84-r2p27\" (UID: \"ed874079-58bf-48a1-8d42-af4769580a43\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-646c885c84-r2p27" Jan 21 18:41:39 crc kubenswrapper[5099]: I0121 18:41:39.672004 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/ed874079-58bf-48a1-8d42-af4769580a43-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-646c885c84-r2p27\" (UID: \"ed874079-58bf-48a1-8d42-af4769580a43\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-646c885c84-r2p27" Jan 21 18:41:39 crc kubenswrapper[5099]: I0121 18:41:39.672449 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/ed874079-58bf-48a1-8d42-af4769580a43-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-646c885c84-r2p27\" (UID: \"ed874079-58bf-48a1-8d42-af4769580a43\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-646c885c84-r2p27" Jan 21 18:41:39 crc kubenswrapper[5099]: I0121 18:41:39.687043 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/ed874079-58bf-48a1-8d42-af4769580a43-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-646c885c84-r2p27\" (UID: \"ed874079-58bf-48a1-8d42-af4769580a43\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-646c885c84-r2p27" Jan 21 18:41:39 crc kubenswrapper[5099]: I0121 18:41:39.693534 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mcjr\" (UniqueName: \"kubernetes.io/projected/ed874079-58bf-48a1-8d42-af4769580a43-kube-api-access-4mcjr\") pod \"default-cloud1-ceil-event-smartgateway-646c885c84-r2p27\" (UID: \"ed874079-58bf-48a1-8d42-af4769580a43\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-646c885c84-r2p27" Jan 21 18:41:39 crc kubenswrapper[5099]: I0121 18:41:39.803697 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-646c885c84-r2p27" Jan 21 18:41:40 crc kubenswrapper[5099]: I0121 18:41:40.263137 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-646c885c84-r2p27"] Jan 21 18:41:40 crc kubenswrapper[5099]: W0121 18:41:40.267026 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded874079_58bf_48a1_8d42_af4769580a43.slice/crio-7b237b1083b96b60423b316551e5bc8999e65c3fa270bf3407922b06868f039e WatchSource:0}: Error finding container 7b237b1083b96b60423b316551e5bc8999e65c3fa270bf3407922b06868f039e: Status 404 returned error can't find the container with id 7b237b1083b96b60423b316551e5bc8999e65c3fa270bf3407922b06868f039e Jan 21 18:41:40 crc kubenswrapper[5099]: I0121 18:41:40.269259 5099 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 18:41:41 crc kubenswrapper[5099]: I0121 18:41:41.075839 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-646c885c84-r2p27" event={"ID":"ed874079-58bf-48a1-8d42-af4769580a43","Type":"ContainerStarted","Data":"7b237b1083b96b60423b316551e5bc8999e65c3fa270bf3407922b06868f039e"} Jan 21 18:41:48 crc kubenswrapper[5099]: I0121 18:41:48.913919 5099 scope.go:117] "RemoveContainer" containerID="2597b26be062a6fac05730bc303e55baf9c4e17d7a57f2962c96e40cd24ca0da" Jan 21 18:41:48 crc kubenswrapper[5099]: E0121 18:41:48.915089 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 18:41:49 crc kubenswrapper[5099]: I0121 18:41:49.156713 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-646c885c84-r2p27" event={"ID":"ed874079-58bf-48a1-8d42-af4769580a43","Type":"ContainerStarted","Data":"61d0a8a4eda44eca9fb4ad2ad503de7c7399a3cd6e1a6d34519a6640588d2b3d"} Jan 21 18:41:49 crc kubenswrapper[5099]: I0121 18:41:49.156848 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-646c885c84-r2p27" event={"ID":"ed874079-58bf-48a1-8d42-af4769580a43","Type":"ContainerStarted","Data":"bf19162a830888c9067296e380076d066a00c9d760a09345fcac7004c26daac6"} Jan 21 18:41:49 crc kubenswrapper[5099]: I0121 18:41:49.180357 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-646c885c84-r2p27" podStartSLOduration=3.130704207 podStartE2EDuration="11.180330448s" podCreationTimestamp="2026-01-21 18:41:38 +0000 UTC" firstStartedPulling="2026-01-21 18:41:40.270331007 +0000 UTC m=+1657.684293468" lastFinishedPulling="2026-01-21 18:41:48.319957258 +0000 UTC m=+1665.733919709" observedRunningTime="2026-01-21 18:41:49.176562775 +0000 UTC m=+1666.590525246" watchObservedRunningTime="2026-01-21 18:41:49.180330448 +0000 UTC m=+1666.594292909" Jan 21 18:41:57 crc kubenswrapper[5099]: I0121 18:41:57.811579 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" 
pods=["service-telemetry/default-interconnect-55bf8d5cb-w5kvd"] Jan 21 18:41:57 crc kubenswrapper[5099]: I0121 18:41:57.812931 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/default-interconnect-55bf8d5cb-w5kvd" podUID="6d433b03-9cad-429b-b20e-b0e71b410375" containerName="default-interconnect" containerID="cri-o://15f818bda2f2a55d02c2fb0b27db0a5100735858840bb4d4511fda1140e51a21" gracePeriod=30 Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.221364 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-w5kvd" Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.235116 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/6d433b03-9cad-429b-b20e-b0e71b410375-default-interconnect-inter-router-credentials\") pod \"6d433b03-9cad-429b-b20e-b0e71b410375\" (UID: \"6d433b03-9cad-429b-b20e-b0e71b410375\") " Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.235174 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxv2w\" (UniqueName: \"kubernetes.io/projected/6d433b03-9cad-429b-b20e-b0e71b410375-kube-api-access-zxv2w\") pod \"6d433b03-9cad-429b-b20e-b0e71b410375\" (UID: \"6d433b03-9cad-429b-b20e-b0e71b410375\") " Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.235262 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/6d433b03-9cad-429b-b20e-b0e71b410375-sasl-config\") pod \"6d433b03-9cad-429b-b20e-b0e71b410375\" (UID: \"6d433b03-9cad-429b-b20e-b0e71b410375\") " Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.235293 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/6d433b03-9cad-429b-b20e-b0e71b410375-default-interconnect-inter-router-ca\") pod \"6d433b03-9cad-429b-b20e-b0e71b410375\" (UID: \"6d433b03-9cad-429b-b20e-b0e71b410375\") " Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.235437 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/6d433b03-9cad-429b-b20e-b0e71b410375-default-interconnect-openstack-credentials\") pod \"6d433b03-9cad-429b-b20e-b0e71b410375\" (UID: \"6d433b03-9cad-429b-b20e-b0e71b410375\") " Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.235501 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/6d433b03-9cad-429b-b20e-b0e71b410375-sasl-users\") pod \"6d433b03-9cad-429b-b20e-b0e71b410375\" (UID: \"6d433b03-9cad-429b-b20e-b0e71b410375\") " Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.235519 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/6d433b03-9cad-429b-b20e-b0e71b410375-default-interconnect-openstack-ca\") pod \"6d433b03-9cad-429b-b20e-b0e71b410375\" (UID: \"6d433b03-9cad-429b-b20e-b0e71b410375\") " Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.237001 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d433b03-9cad-429b-b20e-b0e71b410375-sasl-config" (OuterVolumeSpecName: 
"sasl-config") pod "6d433b03-9cad-429b-b20e-b0e71b410375" (UID: "6d433b03-9cad-429b-b20e-b0e71b410375"). InnerVolumeSpecName "sasl-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.237240 5099 generic.go:358] "Generic (PLEG): container finished" podID="6d433b03-9cad-429b-b20e-b0e71b410375" containerID="15f818bda2f2a55d02c2fb0b27db0a5100735858840bb4d4511fda1140e51a21" exitCode=0 Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.237791 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-w5kvd" Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.237940 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-w5kvd" event={"ID":"6d433b03-9cad-429b-b20e-b0e71b410375","Type":"ContainerDied","Data":"15f818bda2f2a55d02c2fb0b27db0a5100735858840bb4d4511fda1140e51a21"} Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.238057 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-w5kvd" event={"ID":"6d433b03-9cad-429b-b20e-b0e71b410375","Type":"ContainerDied","Data":"55ccd61bdf930a663350d21f66bf9f24b90cfbd9a39148d8c7b6370d07198b3d"} Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.238087 5099 scope.go:117] "RemoveContainer" containerID="15f818bda2f2a55d02c2fb0b27db0a5100735858840bb4d4511fda1140e51a21" Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.250863 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d433b03-9cad-429b-b20e-b0e71b410375-default-interconnect-openstack-ca" (OuterVolumeSpecName: "default-interconnect-openstack-ca") pod "6d433b03-9cad-429b-b20e-b0e71b410375" (UID: "6d433b03-9cad-429b-b20e-b0e71b410375"). InnerVolumeSpecName "default-interconnect-openstack-ca". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.252100 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d433b03-9cad-429b-b20e-b0e71b410375-default-interconnect-openstack-credentials" (OuterVolumeSpecName: "default-interconnect-openstack-credentials") pod "6d433b03-9cad-429b-b20e-b0e71b410375" (UID: "6d433b03-9cad-429b-b20e-b0e71b410375"). InnerVolumeSpecName "default-interconnect-openstack-credentials". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.264991 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d433b03-9cad-429b-b20e-b0e71b410375-kube-api-access-zxv2w" (OuterVolumeSpecName: "kube-api-access-zxv2w") pod "6d433b03-9cad-429b-b20e-b0e71b410375" (UID: "6d433b03-9cad-429b-b20e-b0e71b410375"). InnerVolumeSpecName "kube-api-access-zxv2w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.266015 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d433b03-9cad-429b-b20e-b0e71b410375-default-interconnect-inter-router-credentials" (OuterVolumeSpecName: "default-interconnect-inter-router-credentials") pod "6d433b03-9cad-429b-b20e-b0e71b410375" (UID: "6d433b03-9cad-429b-b20e-b0e71b410375"). InnerVolumeSpecName "default-interconnect-inter-router-credentials". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.270037 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d433b03-9cad-429b-b20e-b0e71b410375-default-interconnect-inter-router-ca" (OuterVolumeSpecName: "default-interconnect-inter-router-ca") pod "6d433b03-9cad-429b-b20e-b0e71b410375" (UID: "6d433b03-9cad-429b-b20e-b0e71b410375"). InnerVolumeSpecName "default-interconnect-inter-router-ca". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.278524 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d433b03-9cad-429b-b20e-b0e71b410375-sasl-users" (OuterVolumeSpecName: "sasl-users") pod "6d433b03-9cad-429b-b20e-b0e71b410375" (UID: "6d433b03-9cad-429b-b20e-b0e71b410375"). InnerVolumeSpecName "sasl-users". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.286855 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-xddc7"] Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.287878 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6d433b03-9cad-429b-b20e-b0e71b410375" containerName="default-interconnect" Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.287904 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d433b03-9cad-429b-b20e-b0e71b410375" containerName="default-interconnect" Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.288106 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="6d433b03-9cad-429b-b20e-b0e71b410375" containerName="default-interconnect" Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.332475 5099 scope.go:117] "RemoveContainer" containerID="15f818bda2f2a55d02c2fb0b27db0a5100735858840bb4d4511fda1140e51a21" Jan 21 18:41:58 crc kubenswrapper[5099]: E0121 18:41:58.339548 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15f818bda2f2a55d02c2fb0b27db0a5100735858840bb4d4511fda1140e51a21\": container with ID starting with 15f818bda2f2a55d02c2fb0b27db0a5100735858840bb4d4511fda1140e51a21 not found: ID does not exist" containerID="15f818bda2f2a55d02c2fb0b27db0a5100735858840bb4d4511fda1140e51a21" Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.339600 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15f818bda2f2a55d02c2fb0b27db0a5100735858840bb4d4511fda1140e51a21"} err="failed to get container status \"15f818bda2f2a55d02c2fb0b27db0a5100735858840bb4d4511fda1140e51a21\": rpc error: code = NotFound desc = could not find container \"15f818bda2f2a55d02c2fb0b27db0a5100735858840bb4d4511fda1140e51a21\": container with ID starting with 15f818bda2f2a55d02c2fb0b27db0a5100735858840bb4d4511fda1140e51a21 not found: ID does not exist" Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.340527 5099 reconciler_common.go:299] "Volume detached for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/6d433b03-9cad-429b-b20e-b0e71b410375-sasl-config\") on node \"crc\" DevicePath \"\"" Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.340584 5099 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-inter-router-ca\" (UniqueName: 
\"kubernetes.io/secret/6d433b03-9cad-429b-b20e-b0e71b410375-default-interconnect-inter-router-ca\") on node \"crc\" DevicePath \"\"" Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.340603 5099 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/6d433b03-9cad-429b-b20e-b0e71b410375-default-interconnect-openstack-credentials\") on node \"crc\" DevicePath \"\"" Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.340620 5099 reconciler_common.go:299] "Volume detached for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/6d433b03-9cad-429b-b20e-b0e71b410375-sasl-users\") on node \"crc\" DevicePath \"\"" Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.340633 5099 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/6d433b03-9cad-429b-b20e-b0e71b410375-default-interconnect-openstack-ca\") on node \"crc\" DevicePath \"\"" Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.340649 5099 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/6d433b03-9cad-429b-b20e-b0e71b410375-default-interconnect-inter-router-credentials\") on node \"crc\" DevicePath \"\"" Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.340661 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zxv2w\" (UniqueName: \"kubernetes.io/projected/6d433b03-9cad-429b-b20e-b0e71b410375-kube-api-access-zxv2w\") on node \"crc\" DevicePath \"\"" Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.832985 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-xddc7"] Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.833503 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-w5kvd"] Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.833559 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-w5kvd"] Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.833355 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-xddc7" Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.837689 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-ca\"" Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.838826 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-users\"" Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.839807 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-dockercfg-snq56\"" Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.839969 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-credentials\"" Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.840142 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-interconnect-sasl-config\"" Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.840321 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-credentials\"" Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.840923 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-ca\"" Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.851550 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/07059825-6270-48dd-9737-b401f10d1f1e-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-xddc7\" (UID: \"07059825-6270-48dd-9737-b401f10d1f1e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xddc7" Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.851993 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l8r9\" (UniqueName: \"kubernetes.io/projected/07059825-6270-48dd-9737-b401f10d1f1e-kube-api-access-4l8r9\") pod \"default-interconnect-55bf8d5cb-xddc7\" (UID: \"07059825-6270-48dd-9737-b401f10d1f1e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xddc7" Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.852100 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/07059825-6270-48dd-9737-b401f10d1f1e-sasl-config\") pod \"default-interconnect-55bf8d5cb-xddc7\" (UID: \"07059825-6270-48dd-9737-b401f10d1f1e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xddc7" Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.852225 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/07059825-6270-48dd-9737-b401f10d1f1e-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-xddc7\" (UID: \"07059825-6270-48dd-9737-b401f10d1f1e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xddc7" Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.852348 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: 
\"kubernetes.io/secret/07059825-6270-48dd-9737-b401f10d1f1e-sasl-users\") pod \"default-interconnect-55bf8d5cb-xddc7\" (UID: \"07059825-6270-48dd-9737-b401f10d1f1e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xddc7" Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.852763 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/07059825-6270-48dd-9737-b401f10d1f1e-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-xddc7\" (UID: \"07059825-6270-48dd-9737-b401f10d1f1e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xddc7" Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.852871 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/07059825-6270-48dd-9737-b401f10d1f1e-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-xddc7\" (UID: \"07059825-6270-48dd-9737-b401f10d1f1e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xddc7" Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.954625 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/07059825-6270-48dd-9737-b401f10d1f1e-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-xddc7\" (UID: \"07059825-6270-48dd-9737-b401f10d1f1e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xddc7" Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.954706 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/07059825-6270-48dd-9737-b401f10d1f1e-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-xddc7\" (UID: \"07059825-6270-48dd-9737-b401f10d1f1e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xddc7" Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.954819 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/07059825-6270-48dd-9737-b401f10d1f1e-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-xddc7\" (UID: \"07059825-6270-48dd-9737-b401f10d1f1e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xddc7" Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.954845 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4l8r9\" (UniqueName: \"kubernetes.io/projected/07059825-6270-48dd-9737-b401f10d1f1e-kube-api-access-4l8r9\") pod \"default-interconnect-55bf8d5cb-xddc7\" (UID: \"07059825-6270-48dd-9737-b401f10d1f1e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xddc7" Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.954864 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/07059825-6270-48dd-9737-b401f10d1f1e-sasl-config\") pod \"default-interconnect-55bf8d5cb-xddc7\" (UID: \"07059825-6270-48dd-9737-b401f10d1f1e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xddc7" Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.955281 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/07059825-6270-48dd-9737-b401f10d1f1e-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-xddc7\" (UID: \"07059825-6270-48dd-9737-b401f10d1f1e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xddc7" Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.955321 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/07059825-6270-48dd-9737-b401f10d1f1e-sasl-users\") pod \"default-interconnect-55bf8d5cb-xddc7\" (UID: \"07059825-6270-48dd-9737-b401f10d1f1e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xddc7" Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.956703 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/07059825-6270-48dd-9737-b401f10d1f1e-sasl-config\") pod \"default-interconnect-55bf8d5cb-xddc7\" (UID: \"07059825-6270-48dd-9737-b401f10d1f1e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xddc7" Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.960720 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/07059825-6270-48dd-9737-b401f10d1f1e-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-xddc7\" (UID: \"07059825-6270-48dd-9737-b401f10d1f1e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xddc7" Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.961020 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/07059825-6270-48dd-9737-b401f10d1f1e-sasl-users\") pod \"default-interconnect-55bf8d5cb-xddc7\" (UID: \"07059825-6270-48dd-9737-b401f10d1f1e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xddc7" Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.961782 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/07059825-6270-48dd-9737-b401f10d1f1e-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-xddc7\" (UID: \"07059825-6270-48dd-9737-b401f10d1f1e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xddc7" Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.963631 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/07059825-6270-48dd-9737-b401f10d1f1e-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-xddc7\" (UID: \"07059825-6270-48dd-9737-b401f10d1f1e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xddc7" Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.964000 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/07059825-6270-48dd-9737-b401f10d1f1e-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-xddc7\" (UID: \"07059825-6270-48dd-9737-b401f10d1f1e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xddc7" Jan 21 18:41:58 crc kubenswrapper[5099]: I0121 18:41:58.979588 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4l8r9\" (UniqueName: 
\"kubernetes.io/projected/07059825-6270-48dd-9737-b401f10d1f1e-kube-api-access-4l8r9\") pod \"default-interconnect-55bf8d5cb-xddc7\" (UID: \"07059825-6270-48dd-9737-b401f10d1f1e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xddc7" Jan 21 18:41:59 crc kubenswrapper[5099]: I0121 18:41:59.170835 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-xddc7" Jan 21 18:41:59 crc kubenswrapper[5099]: I0121 18:41:59.252576 5099 generic.go:358] "Generic (PLEG): container finished" podID="0afa2545-4e28-415f-b67f-e1825e024da4" containerID="c5af00ec975fde836ac7a6c018585e26fac0f091d9bfd4d24f21125f8556a31c" exitCode=0 Jan 21 18:41:59 crc kubenswrapper[5099]: I0121 18:41:59.252655 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk" event={"ID":"0afa2545-4e28-415f-b67f-e1825e024da4","Type":"ContainerDied","Data":"c5af00ec975fde836ac7a6c018585e26fac0f091d9bfd4d24f21125f8556a31c"} Jan 21 18:41:59 crc kubenswrapper[5099]: I0121 18:41:59.253375 5099 scope.go:117] "RemoveContainer" containerID="c5af00ec975fde836ac7a6c018585e26fac0f091d9bfd4d24f21125f8556a31c" Jan 21 18:41:59 crc kubenswrapper[5099]: I0121 18:41:59.254690 5099 generic.go:358] "Generic (PLEG): container finished" podID="ed874079-58bf-48a1-8d42-af4769580a43" containerID="bf19162a830888c9067296e380076d066a00c9d760a09345fcac7004c26daac6" exitCode=0 Jan 21 18:41:59 crc kubenswrapper[5099]: I0121 18:41:59.254796 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-646c885c84-r2p27" event={"ID":"ed874079-58bf-48a1-8d42-af4769580a43","Type":"ContainerDied","Data":"bf19162a830888c9067296e380076d066a00c9d760a09345fcac7004c26daac6"} Jan 21 18:41:59 crc kubenswrapper[5099]: I0121 18:41:59.255522 5099 scope.go:117] "RemoveContainer" containerID="bf19162a830888c9067296e380076d066a00c9d760a09345fcac7004c26daac6" Jan 21 18:41:59 crc kubenswrapper[5099]: I0121 18:41:59.301936 5099 generic.go:358] "Generic (PLEG): container finished" podID="a82cf411-eed8-4850-9fbf-a0c128c16d13" containerID="eed5b9d69d8b505a509318a4374d1164ec4b71265cc4df15447c90487bd2c74d" exitCode=0 Jan 21 18:41:59 crc kubenswrapper[5099]: I0121 18:41:59.303004 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq" event={"ID":"a82cf411-eed8-4850-9fbf-a0c128c16d13","Type":"ContainerDied","Data":"eed5b9d69d8b505a509318a4374d1164ec4b71265cc4df15447c90487bd2c74d"} Jan 21 18:41:59 crc kubenswrapper[5099]: I0121 18:41:59.303992 5099 scope.go:117] "RemoveContainer" containerID="eed5b9d69d8b505a509318a4374d1164ec4b71265cc4df15447c90487bd2c74d" Jan 21 18:41:59 crc kubenswrapper[5099]: I0121 18:41:59.326107 5099 generic.go:358] "Generic (PLEG): container finished" podID="5b9fc1f3-5435-4492-a8fb-4ab02d6de3f6" containerID="5ce483aad0dda1b07978cf6d946c1900fa1a7f43737de833c578f7e7b8c62e86" exitCode=0 Jan 21 18:41:59 crc kubenswrapper[5099]: I0121 18:41:59.326259 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-7dc4588586-kd9rf" event={"ID":"5b9fc1f3-5435-4492-a8fb-4ab02d6de3f6","Type":"ContainerDied","Data":"5ce483aad0dda1b07978cf6d946c1900fa1a7f43737de833c578f7e7b8c62e86"} Jan 21 18:41:59 crc kubenswrapper[5099]: I0121 18:41:59.326855 5099 scope.go:117] "RemoveContainer" 
containerID="5ce483aad0dda1b07978cf6d946c1900fa1a7f43737de833c578f7e7b8c62e86" Jan 21 18:41:59 crc kubenswrapper[5099]: I0121 18:41:59.369009 5099 generic.go:358] "Generic (PLEG): container finished" podID="201688ad-f074-4dc2-9033-36f09f9e4a9d" containerID="a013fff0608be8d166b652444ea748f453afbc290afdd7cdaa470f52e4e25bba" exitCode=0 Jan 21 18:41:59 crc kubenswrapper[5099]: I0121 18:41:59.369376 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv" event={"ID":"201688ad-f074-4dc2-9033-36f09f9e4a9d","Type":"ContainerDied","Data":"a013fff0608be8d166b652444ea748f453afbc290afdd7cdaa470f52e4e25bba"} Jan 21 18:41:59 crc kubenswrapper[5099]: I0121 18:41:59.370114 5099 scope.go:117] "RemoveContainer" containerID="a013fff0608be8d166b652444ea748f453afbc290afdd7cdaa470f52e4e25bba" Jan 21 18:41:59 crc kubenswrapper[5099]: I0121 18:41:59.491667 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-xddc7"] Jan 21 18:41:59 crc kubenswrapper[5099]: I0121 18:41:59.931947 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d433b03-9cad-429b-b20e-b0e71b410375" path="/var/lib/kubelet/pods/6d433b03-9cad-429b-b20e-b0e71b410375/volumes" Jan 21 18:42:00 crc kubenswrapper[5099]: I0121 18:42:00.142813 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483682-4gjbs"] Jan 21 18:42:00 crc kubenswrapper[5099]: I0121 18:42:00.154280 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483682-4gjbs" Jan 21 18:42:00 crc kubenswrapper[5099]: I0121 18:42:00.156653 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 18:42:00 crc kubenswrapper[5099]: I0121 18:42:00.156908 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-z79sf\"" Jan 21 18:42:00 crc kubenswrapper[5099]: I0121 18:42:00.161109 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483682-4gjbs"] Jan 21 18:42:00 crc kubenswrapper[5099]: I0121 18:42:00.161659 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 18:42:00 crc kubenswrapper[5099]: I0121 18:42:00.177726 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj8jh\" (UniqueName: \"kubernetes.io/projected/550091f1-5315-4c09-9616-16b34eddef3a-kube-api-access-kj8jh\") pod \"auto-csr-approver-29483682-4gjbs\" (UID: \"550091f1-5315-4c09-9616-16b34eddef3a\") " pod="openshift-infra/auto-csr-approver-29483682-4gjbs" Jan 21 18:42:00 crc kubenswrapper[5099]: I0121 18:42:00.218448 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/qdr-test"] Jan 21 18:42:00 crc kubenswrapper[5099]: I0121 18:42:00.224268 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/qdr-test" Jan 21 18:42:00 crc kubenswrapper[5099]: I0121 18:42:00.228082 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Jan 21 18:42:00 crc kubenswrapper[5099]: I0121 18:42:00.232341 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-selfsigned\"" Jan 21 18:42:00 crc kubenswrapper[5099]: I0121 18:42:00.232641 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"qdr-test-config\"" Jan 21 18:42:00 crc kubenswrapper[5099]: I0121 18:42:00.280525 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bktfc\" (UniqueName: \"kubernetes.io/projected/3b99bf7c-18ed-4371-91d1-e75f1f80ca19-kube-api-access-bktfc\") pod \"qdr-test\" (UID: \"3b99bf7c-18ed-4371-91d1-e75f1f80ca19\") " pod="service-telemetry/qdr-test" Jan 21 18:42:00 crc kubenswrapper[5099]: I0121 18:42:00.280661 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/3b99bf7c-18ed-4371-91d1-e75f1f80ca19-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"3b99bf7c-18ed-4371-91d1-e75f1f80ca19\") " pod="service-telemetry/qdr-test" Jan 21 18:42:00 crc kubenswrapper[5099]: I0121 18:42:00.280818 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kj8jh\" (UniqueName: \"kubernetes.io/projected/550091f1-5315-4c09-9616-16b34eddef3a-kube-api-access-kj8jh\") pod \"auto-csr-approver-29483682-4gjbs\" (UID: \"550091f1-5315-4c09-9616-16b34eddef3a\") " pod="openshift-infra/auto-csr-approver-29483682-4gjbs" Jan 21 18:42:00 crc kubenswrapper[5099]: I0121 18:42:00.280849 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/3b99bf7c-18ed-4371-91d1-e75f1f80ca19-qdr-test-config\") pod \"qdr-test\" (UID: \"3b99bf7c-18ed-4371-91d1-e75f1f80ca19\") " pod="service-telemetry/qdr-test" Jan 21 18:42:00 crc kubenswrapper[5099]: I0121 18:42:00.311705 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kj8jh\" (UniqueName: \"kubernetes.io/projected/550091f1-5315-4c09-9616-16b34eddef3a-kube-api-access-kj8jh\") pod \"auto-csr-approver-29483682-4gjbs\" (UID: \"550091f1-5315-4c09-9616-16b34eddef3a\") " pod="openshift-infra/auto-csr-approver-29483682-4gjbs" Jan 21 18:42:00 crc kubenswrapper[5099]: I0121 18:42:00.379786 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-xddc7" event={"ID":"07059825-6270-48dd-9737-b401f10d1f1e","Type":"ContainerStarted","Data":"bd3d4b1270ba447a6f1b02b27deff284b10088fa4e9a3f931f2d9af5f1a7ae4a"} Jan 21 18:42:00 crc kubenswrapper[5099]: I0121 18:42:00.379875 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-xddc7" event={"ID":"07059825-6270-48dd-9737-b401f10d1f1e","Type":"ContainerStarted","Data":"19bd9aa26e62f43f547a3fc1c05e8bf88a63a702f50aa824c60b1c9e74ae72ed"} Jan 21 18:42:00 crc kubenswrapper[5099]: I0121 18:42:00.382184 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bktfc\" (UniqueName: \"kubernetes.io/projected/3b99bf7c-18ed-4371-91d1-e75f1f80ca19-kube-api-access-bktfc\") pod 
\"qdr-test\" (UID: \"3b99bf7c-18ed-4371-91d1-e75f1f80ca19\") " pod="service-telemetry/qdr-test" Jan 21 18:42:00 crc kubenswrapper[5099]: I0121 18:42:00.382278 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/3b99bf7c-18ed-4371-91d1-e75f1f80ca19-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"3b99bf7c-18ed-4371-91d1-e75f1f80ca19\") " pod="service-telemetry/qdr-test" Jan 21 18:42:00 crc kubenswrapper[5099]: I0121 18:42:00.382306 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/3b99bf7c-18ed-4371-91d1-e75f1f80ca19-qdr-test-config\") pod \"qdr-test\" (UID: \"3b99bf7c-18ed-4371-91d1-e75f1f80ca19\") " pod="service-telemetry/qdr-test" Jan 21 18:42:00 crc kubenswrapper[5099]: I0121 18:42:00.383574 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/3b99bf7c-18ed-4371-91d1-e75f1f80ca19-qdr-test-config\") pod \"qdr-test\" (UID: \"3b99bf7c-18ed-4371-91d1-e75f1f80ca19\") " pod="service-telemetry/qdr-test" Jan 21 18:42:00 crc kubenswrapper[5099]: I0121 18:42:00.388946 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk" event={"ID":"0afa2545-4e28-415f-b67f-e1825e024da4","Type":"ContainerStarted","Data":"6640f73a3d6bc334e35f4946fed137429ac0e31e04fe8f1bd0b1aa742fec3a40"} Jan 21 18:42:00 crc kubenswrapper[5099]: I0121 18:42:00.394214 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-646c885c84-r2p27" event={"ID":"ed874079-58bf-48a1-8d42-af4769580a43","Type":"ContainerStarted","Data":"e460aa169ab4de1ed77b29bd922fd0ff1cbc18351b43002da0e1f737eb80f31b"} Jan 21 18:42:00 crc kubenswrapper[5099]: I0121 18:42:00.406523 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bktfc\" (UniqueName: \"kubernetes.io/projected/3b99bf7c-18ed-4371-91d1-e75f1f80ca19-kube-api-access-bktfc\") pod \"qdr-test\" (UID: \"3b99bf7c-18ed-4371-91d1-e75f1f80ca19\") " pod="service-telemetry/qdr-test" Jan 21 18:42:00 crc kubenswrapper[5099]: I0121 18:42:00.411145 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-55bf8d5cb-xddc7" podStartSLOduration=3.411123161 podStartE2EDuration="3.411123161s" podCreationTimestamp="2026-01-21 18:41:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:42:00.404708663 +0000 UTC m=+1677.818671124" watchObservedRunningTime="2026-01-21 18:42:00.411123161 +0000 UTC m=+1677.825085622" Jan 21 18:42:00 crc kubenswrapper[5099]: I0121 18:42:00.412836 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq" event={"ID":"a82cf411-eed8-4850-9fbf-a0c128c16d13","Type":"ContainerStarted","Data":"2baf6b60b37dcddd830d1f8e5b6c37db9ab9a62f2f88495e671353eb26f3ce0e"} Jan 21 18:42:00 crc kubenswrapper[5099]: I0121 18:42:00.416834 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/3b99bf7c-18ed-4371-91d1-e75f1f80ca19-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: 
\"3b99bf7c-18ed-4371-91d1-e75f1f80ca19\") " pod="service-telemetry/qdr-test" Jan 21 18:42:00 crc kubenswrapper[5099]: I0121 18:42:00.431691 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-7dc4588586-kd9rf" event={"ID":"5b9fc1f3-5435-4492-a8fb-4ab02d6de3f6","Type":"ContainerStarted","Data":"ca914ba679bfb7a2b01052073195302e5067f0ad78a700d3872cbb3d90553486"} Jan 21 18:42:00 crc kubenswrapper[5099]: I0121 18:42:00.452217 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv" event={"ID":"201688ad-f074-4dc2-9033-36f09f9e4a9d","Type":"ContainerStarted","Data":"68443a97b6192e5df35cf64a3c5e9d52b43b4034be10e5f27fc89f01d7e9e115"} Jan 21 18:42:00 crc kubenswrapper[5099]: I0121 18:42:00.481450 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483682-4gjbs" Jan 21 18:42:00 crc kubenswrapper[5099]: I0121 18:42:00.556936 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Jan 21 18:42:00 crc kubenswrapper[5099]: I0121 18:42:00.914717 5099 scope.go:117] "RemoveContainer" containerID="2597b26be062a6fac05730bc303e55baf9c4e17d7a57f2962c96e40cd24ca0da" Jan 21 18:42:00 crc kubenswrapper[5099]: E0121 18:42:00.915682 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 18:42:01 crc kubenswrapper[5099]: I0121 18:42:01.041465 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483682-4gjbs"] Jan 21 18:42:01 crc kubenswrapper[5099]: W0121 18:42:01.063233 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod550091f1_5315_4c09_9616_16b34eddef3a.slice/crio-3ca77bba06d441c43681d6f014fd619f032ce41de9e05394f75a1471bb7add26 WatchSource:0}: Error finding container 3ca77bba06d441c43681d6f014fd619f032ce41de9e05394f75a1471bb7add26: Status 404 returned error can't find the container with id 3ca77bba06d441c43681d6f014fd619f032ce41de9e05394f75a1471bb7add26 Jan 21 18:42:01 crc kubenswrapper[5099]: I0121 18:42:01.126681 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Jan 21 18:42:01 crc kubenswrapper[5099]: W0121 18:42:01.131266 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b99bf7c_18ed_4371_91d1_e75f1f80ca19.slice/crio-b86b7d424c373c1f77312ada08b66824ad062160f86a5c1f85520cf7ad83b078 WatchSource:0}: Error finding container b86b7d424c373c1f77312ada08b66824ad062160f86a5c1f85520cf7ad83b078: Status 404 returned error can't find the container with id b86b7d424c373c1f77312ada08b66824ad062160f86a5c1f85520cf7ad83b078 Jan 21 18:42:01 crc kubenswrapper[5099]: I0121 18:42:01.463440 5099 generic.go:358] "Generic (PLEG): container finished" podID="0afa2545-4e28-415f-b67f-e1825e024da4" containerID="6640f73a3d6bc334e35f4946fed137429ac0e31e04fe8f1bd0b1aa742fec3a40" exitCode=0 Jan 21 18:42:01 crc kubenswrapper[5099]: I0121 18:42:01.463572 5099 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk" event={"ID":"0afa2545-4e28-415f-b67f-e1825e024da4","Type":"ContainerDied","Data":"6640f73a3d6bc334e35f4946fed137429ac0e31e04fe8f1bd0b1aa742fec3a40"} Jan 21 18:42:01 crc kubenswrapper[5099]: I0121 18:42:01.463623 5099 scope.go:117] "RemoveContainer" containerID="c5af00ec975fde836ac7a6c018585e26fac0f091d9bfd4d24f21125f8556a31c" Jan 21 18:42:01 crc kubenswrapper[5099]: I0121 18:42:01.464374 5099 scope.go:117] "RemoveContainer" containerID="6640f73a3d6bc334e35f4946fed137429ac0e31e04fe8f1bd0b1aa742fec3a40" Jan 21 18:42:01 crc kubenswrapper[5099]: E0121 18:42:01.464819 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk_service-telemetry(0afa2545-4e28-415f-b67f-e1825e024da4)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk" podUID="0afa2545-4e28-415f-b67f-e1825e024da4" Jan 21 18:42:01 crc kubenswrapper[5099]: I0121 18:42:01.470458 5099 generic.go:358] "Generic (PLEG): container finished" podID="ed874079-58bf-48a1-8d42-af4769580a43" containerID="e460aa169ab4de1ed77b29bd922fd0ff1cbc18351b43002da0e1f737eb80f31b" exitCode=0 Jan 21 18:42:01 crc kubenswrapper[5099]: I0121 18:42:01.470565 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-646c885c84-r2p27" event={"ID":"ed874079-58bf-48a1-8d42-af4769580a43","Type":"ContainerDied","Data":"e460aa169ab4de1ed77b29bd922fd0ff1cbc18351b43002da0e1f737eb80f31b"} Jan 21 18:42:01 crc kubenswrapper[5099]: I0121 18:42:01.471216 5099 scope.go:117] "RemoveContainer" containerID="e460aa169ab4de1ed77b29bd922fd0ff1cbc18351b43002da0e1f737eb80f31b" Jan 21 18:42:01 crc kubenswrapper[5099]: E0121 18:42:01.471605 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-event-smartgateway-646c885c84-r2p27_service-telemetry(ed874079-58bf-48a1-8d42-af4769580a43)\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-646c885c84-r2p27" podUID="ed874079-58bf-48a1-8d42-af4769580a43" Jan 21 18:42:01 crc kubenswrapper[5099]: I0121 18:42:01.482615 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"3b99bf7c-18ed-4371-91d1-e75f1f80ca19","Type":"ContainerStarted","Data":"b86b7d424c373c1f77312ada08b66824ad062160f86a5c1f85520cf7ad83b078"} Jan 21 18:42:01 crc kubenswrapper[5099]: I0121 18:42:01.496721 5099 generic.go:358] "Generic (PLEG): container finished" podID="a82cf411-eed8-4850-9fbf-a0c128c16d13" containerID="2baf6b60b37dcddd830d1f8e5b6c37db9ab9a62f2f88495e671353eb26f3ce0e" exitCode=0 Jan 21 18:42:01 crc kubenswrapper[5099]: I0121 18:42:01.496919 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq" event={"ID":"a82cf411-eed8-4850-9fbf-a0c128c16d13","Type":"ContainerDied","Data":"2baf6b60b37dcddd830d1f8e5b6c37db9ab9a62f2f88495e671353eb26f3ce0e"} Jan 21 18:42:01 crc kubenswrapper[5099]: I0121 18:42:01.497690 5099 scope.go:117] "RemoveContainer" containerID="2baf6b60b37dcddd830d1f8e5b6c37db9ab9a62f2f88495e671353eb26f3ce0e" Jan 21 18:42:01 crc kubenswrapper[5099]: E0121 18:42:01.498393 5099 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq_service-telemetry(a82cf411-eed8-4850-9fbf-a0c128c16d13)\"" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq" podUID="a82cf411-eed8-4850-9fbf-a0c128c16d13" Jan 21 18:42:01 crc kubenswrapper[5099]: I0121 18:42:01.545135 5099 generic.go:358] "Generic (PLEG): container finished" podID="5b9fc1f3-5435-4492-a8fb-4ab02d6de3f6" containerID="ca914ba679bfb7a2b01052073195302e5067f0ad78a700d3872cbb3d90553486" exitCode=0 Jan 21 18:42:01 crc kubenswrapper[5099]: I0121 18:42:01.545364 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-7dc4588586-kd9rf" event={"ID":"5b9fc1f3-5435-4492-a8fb-4ab02d6de3f6","Type":"ContainerDied","Data":"ca914ba679bfb7a2b01052073195302e5067f0ad78a700d3872cbb3d90553486"} Jan 21 18:42:01 crc kubenswrapper[5099]: I0121 18:42:01.546349 5099 scope.go:117] "RemoveContainer" containerID="bf19162a830888c9067296e380076d066a00c9d760a09345fcac7004c26daac6" Jan 21 18:42:01 crc kubenswrapper[5099]: I0121 18:42:01.547002 5099 scope.go:117] "RemoveContainer" containerID="ca914ba679bfb7a2b01052073195302e5067f0ad78a700d3872cbb3d90553486" Jan 21 18:42:01 crc kubenswrapper[5099]: E0121 18:42:01.548076 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-event-smartgateway-7dc4588586-kd9rf_service-telemetry(5b9fc1f3-5435-4492-a8fb-4ab02d6de3f6)\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-7dc4588586-kd9rf" podUID="5b9fc1f3-5435-4492-a8fb-4ab02d6de3f6" Jan 21 18:42:01 crc kubenswrapper[5099]: I0121 18:42:01.591779 5099 generic.go:358] "Generic (PLEG): container finished" podID="201688ad-f074-4dc2-9033-36f09f9e4a9d" containerID="68443a97b6192e5df35cf64a3c5e9d52b43b4034be10e5f27fc89f01d7e9e115" exitCode=0 Jan 21 18:42:01 crc kubenswrapper[5099]: I0121 18:42:01.591952 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv" event={"ID":"201688ad-f074-4dc2-9033-36f09f9e4a9d","Type":"ContainerDied","Data":"68443a97b6192e5df35cf64a3c5e9d52b43b4034be10e5f27fc89f01d7e9e115"} Jan 21 18:42:01 crc kubenswrapper[5099]: I0121 18:42:01.592933 5099 scope.go:117] "RemoveContainer" containerID="68443a97b6192e5df35cf64a3c5e9d52b43b4034be10e5f27fc89f01d7e9e115" Jan 21 18:42:01 crc kubenswrapper[5099]: E0121 18:42:01.593280 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv_service-telemetry(201688ad-f074-4dc2-9033-36f09f9e4a9d)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv" podUID="201688ad-f074-4dc2-9033-36f09f9e4a9d" Jan 21 18:42:01 crc kubenswrapper[5099]: I0121 18:42:01.603565 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483682-4gjbs" event={"ID":"550091f1-5315-4c09-9616-16b34eddef3a","Type":"ContainerStarted","Data":"3ca77bba06d441c43681d6f014fd619f032ce41de9e05394f75a1471bb7add26"} Jan 21 18:42:01 crc kubenswrapper[5099]: I0121 18:42:01.616193 5099 scope.go:117] 
"RemoveContainer" containerID="eed5b9d69d8b505a509318a4374d1164ec4b71265cc4df15447c90487bd2c74d" Jan 21 18:42:01 crc kubenswrapper[5099]: I0121 18:42:01.657000 5099 scope.go:117] "RemoveContainer" containerID="5ce483aad0dda1b07978cf6d946c1900fa1a7f43737de833c578f7e7b8c62e86" Jan 21 18:42:01 crc kubenswrapper[5099]: I0121 18:42:01.695114 5099 scope.go:117] "RemoveContainer" containerID="a013fff0608be8d166b652444ea748f453afbc290afdd7cdaa470f52e4e25bba" Jan 21 18:42:02 crc kubenswrapper[5099]: I0121 18:42:02.623868 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483682-4gjbs" event={"ID":"550091f1-5315-4c09-9616-16b34eddef3a","Type":"ContainerStarted","Data":"57853aa65de70aa4cf974fd92c6dacc6fde271a4e3fe38fde4ce0a19f197a54a"} Jan 21 18:42:02 crc kubenswrapper[5099]: I0121 18:42:02.648764 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29483682-4gjbs" podStartSLOduration=1.7150228520000002 podStartE2EDuration="2.648713441s" podCreationTimestamp="2026-01-21 18:42:00 +0000 UTC" firstStartedPulling="2026-01-21 18:42:01.065354842 +0000 UTC m=+1678.479317303" lastFinishedPulling="2026-01-21 18:42:01.999045431 +0000 UTC m=+1679.413007892" observedRunningTime="2026-01-21 18:42:02.640431587 +0000 UTC m=+1680.054394068" watchObservedRunningTime="2026-01-21 18:42:02.648713441 +0000 UTC m=+1680.062675912" Jan 21 18:42:03 crc kubenswrapper[5099]: I0121 18:42:03.667903 5099 generic.go:358] "Generic (PLEG): container finished" podID="550091f1-5315-4c09-9616-16b34eddef3a" containerID="57853aa65de70aa4cf974fd92c6dacc6fde271a4e3fe38fde4ce0a19f197a54a" exitCode=0 Jan 21 18:42:03 crc kubenswrapper[5099]: I0121 18:42:03.668385 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483682-4gjbs" event={"ID":"550091f1-5315-4c09-9616-16b34eddef3a","Type":"ContainerDied","Data":"57853aa65de70aa4cf974fd92c6dacc6fde271a4e3fe38fde4ce0a19f197a54a"} Jan 21 18:42:04 crc kubenswrapper[5099]: I0121 18:42:04.997433 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483682-4gjbs" Jan 21 18:42:05 crc kubenswrapper[5099]: I0121 18:42:05.061368 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kj8jh\" (UniqueName: \"kubernetes.io/projected/550091f1-5315-4c09-9616-16b34eddef3a-kube-api-access-kj8jh\") pod \"550091f1-5315-4c09-9616-16b34eddef3a\" (UID: \"550091f1-5315-4c09-9616-16b34eddef3a\") " Jan 21 18:42:05 crc kubenswrapper[5099]: I0121 18:42:05.079322 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/550091f1-5315-4c09-9616-16b34eddef3a-kube-api-access-kj8jh" (OuterVolumeSpecName: "kube-api-access-kj8jh") pod "550091f1-5315-4c09-9616-16b34eddef3a" (UID: "550091f1-5315-4c09-9616-16b34eddef3a"). InnerVolumeSpecName "kube-api-access-kj8jh". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:42:05 crc kubenswrapper[5099]: I0121 18:42:05.163425 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kj8jh\" (UniqueName: \"kubernetes.io/projected/550091f1-5315-4c09-9616-16b34eddef3a-kube-api-access-kj8jh\") on node \"crc\" DevicePath \"\"" Jan 21 18:42:05 crc kubenswrapper[5099]: I0121 18:42:05.719379 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483682-4gjbs" event={"ID":"550091f1-5315-4c09-9616-16b34eddef3a","Type":"ContainerDied","Data":"3ca77bba06d441c43681d6f014fd619f032ce41de9e05394f75a1471bb7add26"} Jan 21 18:42:05 crc kubenswrapper[5099]: I0121 18:42:05.719436 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ca77bba06d441c43681d6f014fd619f032ce41de9e05394f75a1471bb7add26" Jan 21 18:42:05 crc kubenswrapper[5099]: I0121 18:42:05.719527 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483682-4gjbs" Jan 21 18:42:05 crc kubenswrapper[5099]: I0121 18:42:05.748269 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483676-9pvxk"] Jan 21 18:42:05 crc kubenswrapper[5099]: I0121 18:42:05.759064 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483676-9pvxk"] Jan 21 18:42:05 crc kubenswrapper[5099]: I0121 18:42:05.926969 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7616d8d-d2f4-463e-a174-b133d0fdbac9" path="/var/lib/kubelet/pods/d7616d8d-d2f4-463e-a174-b133d0fdbac9/volumes" Jan 21 18:42:12 crc kubenswrapper[5099]: I0121 18:42:12.782967 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"3b99bf7c-18ed-4371-91d1-e75f1f80ca19","Type":"ContainerStarted","Data":"da3ae7941b8d9ac1aeeba9cdadc4fe4f4eaccad57c1df3f06081b9ee99edf26d"} Jan 21 18:42:12 crc kubenswrapper[5099]: I0121 18:42:12.805915 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/qdr-test" podStartSLOduration=1.959914887 podStartE2EDuration="12.805704299s" podCreationTimestamp="2026-01-21 18:42:00 +0000 UTC" firstStartedPulling="2026-01-21 18:42:01.134325835 +0000 UTC m=+1678.548288296" lastFinishedPulling="2026-01-21 18:42:11.980115247 +0000 UTC m=+1689.394077708" observedRunningTime="2026-01-21 18:42:12.80039049 +0000 UTC m=+1690.214352951" watchObservedRunningTime="2026-01-21 18:42:12.805704299 +0000 UTC m=+1690.219666780" Jan 21 18:42:12 crc kubenswrapper[5099]: I0121 18:42:12.916782 5099 scope.go:117] "RemoveContainer" containerID="e460aa169ab4de1ed77b29bd922fd0ff1cbc18351b43002da0e1f737eb80f31b" Jan 21 18:42:12 crc kubenswrapper[5099]: I0121 18:42:12.917122 5099 scope.go:117] "RemoveContainer" containerID="2baf6b60b37dcddd830d1f8e5b6c37db9ab9a62f2f88495e671353eb26f3ce0e" Jan 21 18:42:13 crc kubenswrapper[5099]: I0121 18:42:13.136156 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-l9hjv"] Jan 21 18:42:13 crc kubenswrapper[5099]: I0121 18:42:13.138109 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="550091f1-5315-4c09-9616-16b34eddef3a" containerName="oc" Jan 21 18:42:13 crc kubenswrapper[5099]: I0121 18:42:13.138936 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="550091f1-5315-4c09-9616-16b34eddef3a" containerName="oc" Jan 21 18:42:13 crc kubenswrapper[5099]: I0121 
18:42:13.139813 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="550091f1-5315-4c09-9616-16b34eddef3a" containerName="oc" Jan 21 18:42:13 crc kubenswrapper[5099]: I0121 18:42:13.148129 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-l9hjv" Jan 21 18:42:13 crc kubenswrapper[5099]: I0121 18:42:13.152263 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-healthcheck-log\"" Jan 21 18:42:13 crc kubenswrapper[5099]: I0121 18:42:13.153593 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-entrypoint-script\"" Jan 21 18:42:13 crc kubenswrapper[5099]: I0121 18:42:13.153846 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-entrypoint-script\"" Jan 21 18:42:13 crc kubenswrapper[5099]: I0121 18:42:13.154112 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-config\"" Jan 21 18:42:13 crc kubenswrapper[5099]: I0121 18:42:13.155972 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-sensubility-config\"" Jan 21 18:42:13 crc kubenswrapper[5099]: I0121 18:42:13.156252 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-publisher\"" Jan 21 18:42:13 crc kubenswrapper[5099]: I0121 18:42:13.160233 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-l9hjv"] Jan 21 18:42:13 crc kubenswrapper[5099]: I0121 18:42:13.250384 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvpbt\" (UniqueName: \"kubernetes.io/projected/b63ebd7b-dd5d-4649-9236-253b8c930ef9-kube-api-access-qvpbt\") pod \"stf-smoketest-smoke1-l9hjv\" (UID: \"b63ebd7b-dd5d-4649-9236-253b8c930ef9\") " pod="service-telemetry/stf-smoketest-smoke1-l9hjv" Jan 21 18:42:13 crc kubenswrapper[5099]: I0121 18:42:13.250461 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/b63ebd7b-dd5d-4649-9236-253b8c930ef9-collectd-config\") pod \"stf-smoketest-smoke1-l9hjv\" (UID: \"b63ebd7b-dd5d-4649-9236-253b8c930ef9\") " pod="service-telemetry/stf-smoketest-smoke1-l9hjv" Jan 21 18:42:13 crc kubenswrapper[5099]: I0121 18:42:13.250508 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/b63ebd7b-dd5d-4649-9236-253b8c930ef9-sensubility-config\") pod \"stf-smoketest-smoke1-l9hjv\" (UID: \"b63ebd7b-dd5d-4649-9236-253b8c930ef9\") " pod="service-telemetry/stf-smoketest-smoke1-l9hjv" Jan 21 18:42:13 crc kubenswrapper[5099]: I0121 18:42:13.250592 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/b63ebd7b-dd5d-4649-9236-253b8c930ef9-ceilometer-publisher\") pod \"stf-smoketest-smoke1-l9hjv\" (UID: \"b63ebd7b-dd5d-4649-9236-253b8c930ef9\") " pod="service-telemetry/stf-smoketest-smoke1-l9hjv" Jan 21 18:42:13 crc kubenswrapper[5099]: I0121 18:42:13.251889 5099 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/b63ebd7b-dd5d-4649-9236-253b8c930ef9-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-l9hjv\" (UID: \"b63ebd7b-dd5d-4649-9236-253b8c930ef9\") " pod="service-telemetry/stf-smoketest-smoke1-l9hjv" Jan 21 18:42:13 crc kubenswrapper[5099]: I0121 18:42:13.252026 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/b63ebd7b-dd5d-4649-9236-253b8c930ef9-healthcheck-log\") pod \"stf-smoketest-smoke1-l9hjv\" (UID: \"b63ebd7b-dd5d-4649-9236-253b8c930ef9\") " pod="service-telemetry/stf-smoketest-smoke1-l9hjv" Jan 21 18:42:13 crc kubenswrapper[5099]: I0121 18:42:13.252140 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/b63ebd7b-dd5d-4649-9236-253b8c930ef9-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-l9hjv\" (UID: \"b63ebd7b-dd5d-4649-9236-253b8c930ef9\") " pod="service-telemetry/stf-smoketest-smoke1-l9hjv" Jan 21 18:42:13 crc kubenswrapper[5099]: I0121 18:42:13.354389 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/b63ebd7b-dd5d-4649-9236-253b8c930ef9-ceilometer-publisher\") pod \"stf-smoketest-smoke1-l9hjv\" (UID: \"b63ebd7b-dd5d-4649-9236-253b8c930ef9\") " pod="service-telemetry/stf-smoketest-smoke1-l9hjv" Jan 21 18:42:13 crc kubenswrapper[5099]: I0121 18:42:13.355379 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/b63ebd7b-dd5d-4649-9236-253b8c930ef9-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-l9hjv\" (UID: \"b63ebd7b-dd5d-4649-9236-253b8c930ef9\") " pod="service-telemetry/stf-smoketest-smoke1-l9hjv" Jan 21 18:42:13 crc kubenswrapper[5099]: I0121 18:42:13.355487 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/b63ebd7b-dd5d-4649-9236-253b8c930ef9-healthcheck-log\") pod \"stf-smoketest-smoke1-l9hjv\" (UID: \"b63ebd7b-dd5d-4649-9236-253b8c930ef9\") " pod="service-telemetry/stf-smoketest-smoke1-l9hjv" Jan 21 18:42:13 crc kubenswrapper[5099]: I0121 18:42:13.355597 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/b63ebd7b-dd5d-4649-9236-253b8c930ef9-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-l9hjv\" (UID: \"b63ebd7b-dd5d-4649-9236-253b8c930ef9\") " pod="service-telemetry/stf-smoketest-smoke1-l9hjv" Jan 21 18:42:13 crc kubenswrapper[5099]: I0121 18:42:13.355713 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qvpbt\" (UniqueName: \"kubernetes.io/projected/b63ebd7b-dd5d-4649-9236-253b8c930ef9-kube-api-access-qvpbt\") pod \"stf-smoketest-smoke1-l9hjv\" (UID: \"b63ebd7b-dd5d-4649-9236-253b8c930ef9\") " pod="service-telemetry/stf-smoketest-smoke1-l9hjv" Jan 21 18:42:13 crc kubenswrapper[5099]: I0121 18:42:13.355824 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/b63ebd7b-dd5d-4649-9236-253b8c930ef9-collectd-config\") pod \"stf-smoketest-smoke1-l9hjv\" (UID: 
\"b63ebd7b-dd5d-4649-9236-253b8c930ef9\") " pod="service-telemetry/stf-smoketest-smoke1-l9hjv" Jan 21 18:42:13 crc kubenswrapper[5099]: I0121 18:42:13.355939 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/b63ebd7b-dd5d-4649-9236-253b8c930ef9-sensubility-config\") pod \"stf-smoketest-smoke1-l9hjv\" (UID: \"b63ebd7b-dd5d-4649-9236-253b8c930ef9\") " pod="service-telemetry/stf-smoketest-smoke1-l9hjv" Jan 21 18:42:13 crc kubenswrapper[5099]: I0121 18:42:13.355968 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/b63ebd7b-dd5d-4649-9236-253b8c930ef9-ceilometer-publisher\") pod \"stf-smoketest-smoke1-l9hjv\" (UID: \"b63ebd7b-dd5d-4649-9236-253b8c930ef9\") " pod="service-telemetry/stf-smoketest-smoke1-l9hjv" Jan 21 18:42:13 crc kubenswrapper[5099]: I0121 18:42:13.356434 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/b63ebd7b-dd5d-4649-9236-253b8c930ef9-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-l9hjv\" (UID: \"b63ebd7b-dd5d-4649-9236-253b8c930ef9\") " pod="service-telemetry/stf-smoketest-smoke1-l9hjv" Jan 21 18:42:13 crc kubenswrapper[5099]: I0121 18:42:13.356671 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/b63ebd7b-dd5d-4649-9236-253b8c930ef9-healthcheck-log\") pod \"stf-smoketest-smoke1-l9hjv\" (UID: \"b63ebd7b-dd5d-4649-9236-253b8c930ef9\") " pod="service-telemetry/stf-smoketest-smoke1-l9hjv" Jan 21 18:42:13 crc kubenswrapper[5099]: I0121 18:42:13.356876 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/b63ebd7b-dd5d-4649-9236-253b8c930ef9-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-l9hjv\" (UID: \"b63ebd7b-dd5d-4649-9236-253b8c930ef9\") " pod="service-telemetry/stf-smoketest-smoke1-l9hjv" Jan 21 18:42:13 crc kubenswrapper[5099]: I0121 18:42:13.357113 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/b63ebd7b-dd5d-4649-9236-253b8c930ef9-collectd-config\") pod \"stf-smoketest-smoke1-l9hjv\" (UID: \"b63ebd7b-dd5d-4649-9236-253b8c930ef9\") " pod="service-telemetry/stf-smoketest-smoke1-l9hjv" Jan 21 18:42:13 crc kubenswrapper[5099]: I0121 18:42:13.357165 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/b63ebd7b-dd5d-4649-9236-253b8c930ef9-sensubility-config\") pod \"stf-smoketest-smoke1-l9hjv\" (UID: \"b63ebd7b-dd5d-4649-9236-253b8c930ef9\") " pod="service-telemetry/stf-smoketest-smoke1-l9hjv" Jan 21 18:42:13 crc kubenswrapper[5099]: I0121 18:42:13.400079 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvpbt\" (UniqueName: \"kubernetes.io/projected/b63ebd7b-dd5d-4649-9236-253b8c930ef9-kube-api-access-qvpbt\") pod \"stf-smoketest-smoke1-l9hjv\" (UID: \"b63ebd7b-dd5d-4649-9236-253b8c930ef9\") " pod="service-telemetry/stf-smoketest-smoke1-l9hjv" Jan 21 18:42:13 crc kubenswrapper[5099]: I0121 18:42:13.499009 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-l9hjv" Jan 21 18:42:13 crc kubenswrapper[5099]: I0121 18:42:13.651639 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/curl"] Jan 21 18:42:13 crc kubenswrapper[5099]: I0121 18:42:13.658793 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Jan 21 18:42:13 crc kubenswrapper[5099]: I0121 18:42:13.665818 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh999\" (UniqueName: \"kubernetes.io/projected/2b184571-3b36-4cc9-9494-e673a05d23a2-kube-api-access-kh999\") pod \"curl\" (UID: \"2b184571-3b36-4cc9-9494-e673a05d23a2\") " pod="service-telemetry/curl" Jan 21 18:42:13 crc kubenswrapper[5099]: I0121 18:42:13.703867 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Jan 21 18:42:13 crc kubenswrapper[5099]: I0121 18:42:13.772369 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kh999\" (UniqueName: \"kubernetes.io/projected/2b184571-3b36-4cc9-9494-e673a05d23a2-kube-api-access-kh999\") pod \"curl\" (UID: \"2b184571-3b36-4cc9-9494-e673a05d23a2\") " pod="service-telemetry/curl" Jan 21 18:42:13 crc kubenswrapper[5099]: I0121 18:42:13.811553 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-646c885c84-r2p27" event={"ID":"ed874079-58bf-48a1-8d42-af4769580a43","Type":"ContainerStarted","Data":"c85d286deeee4c5666765e005ef2004300f05f7e9401c825f6622133383dacc3"} Jan 21 18:42:13 crc kubenswrapper[5099]: I0121 18:42:13.818542 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kh999\" (UniqueName: \"kubernetes.io/projected/2b184571-3b36-4cc9-9494-e673a05d23a2-kube-api-access-kh999\") pod \"curl\" (UID: \"2b184571-3b36-4cc9-9494-e673a05d23a2\") " pod="service-telemetry/curl" Jan 21 18:42:13 crc kubenswrapper[5099]: I0121 18:42:13.826875 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq" event={"ID":"a82cf411-eed8-4850-9fbf-a0c128c16d13","Type":"ContainerStarted","Data":"71bcf8dbd356cc45096f5932cda09c1c5778ce2a82712d9406dc216defb1c98e"} Jan 21 18:42:13 crc kubenswrapper[5099]: I0121 18:42:13.915143 5099 scope.go:117] "RemoveContainer" containerID="ca914ba679bfb7a2b01052073195302e5067f0ad78a700d3872cbb3d90553486" Jan 21 18:42:13 crc kubenswrapper[5099]: I0121 18:42:13.916987 5099 scope.go:117] "RemoveContainer" containerID="6640f73a3d6bc334e35f4946fed137429ac0e31e04fe8f1bd0b1aa742fec3a40" Jan 21 18:42:14 crc kubenswrapper[5099]: I0121 18:42:14.025143 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Jan 21 18:42:14 crc kubenswrapper[5099]: I0121 18:42:14.089531 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-l9hjv"] Jan 21 18:42:14 crc kubenswrapper[5099]: I0121 18:42:14.358917 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Jan 21 18:42:14 crc kubenswrapper[5099]: W0121 18:42:14.367634 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b184571_3b36_4cc9_9494_e673a05d23a2.slice/crio-78292e6a3a9f6405cea3da251ddb6e576801f052755b323a4a856ef66bfcd0af WatchSource:0}: Error finding container 78292e6a3a9f6405cea3da251ddb6e576801f052755b323a4a856ef66bfcd0af: Status 404 returned error can't find the container with id 78292e6a3a9f6405cea3da251ddb6e576801f052755b323a4a856ef66bfcd0af Jan 21 18:42:14 crc kubenswrapper[5099]: I0121 18:42:14.837977 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"2b184571-3b36-4cc9-9494-e673a05d23a2","Type":"ContainerStarted","Data":"78292e6a3a9f6405cea3da251ddb6e576801f052755b323a4a856ef66bfcd0af"} Jan 21 18:42:14 crc kubenswrapper[5099]: I0121 18:42:14.841495 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk" event={"ID":"0afa2545-4e28-415f-b67f-e1825e024da4","Type":"ContainerStarted","Data":"03054a7d7703a94386c1bc81514f27824d5af2033505f11065d85d1e2defb684"} Jan 21 18:42:14 crc kubenswrapper[5099]: I0121 18:42:14.847637 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-7dc4588586-kd9rf" event={"ID":"5b9fc1f3-5435-4492-a8fb-4ab02d6de3f6","Type":"ContainerStarted","Data":"b6c379166a9fdb42d157d302bbee9dde3de56d55b4af6873f11a0fe3f7ffaf0a"} Jan 21 18:42:14 crc kubenswrapper[5099]: I0121 18:42:14.880280 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-l9hjv" event={"ID":"b63ebd7b-dd5d-4649-9236-253b8c930ef9","Type":"ContainerStarted","Data":"b155008e8571fed2032e932a9df7e843551a32da07905f61113f8ed1d1043268"} Jan 21 18:42:14 crc kubenswrapper[5099]: I0121 18:42:14.913702 5099 scope.go:117] "RemoveContainer" containerID="68443a97b6192e5df35cf64a3c5e9d52b43b4034be10e5f27fc89f01d7e9e115" Jan 21 18:42:14 crc kubenswrapper[5099]: I0121 18:42:14.916031 5099 scope.go:117] "RemoveContainer" containerID="2597b26be062a6fac05730bc303e55baf9c4e17d7a57f2962c96e40cd24ca0da" Jan 21 18:42:14 crc kubenswrapper[5099]: E0121 18:42:14.916254 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 18:42:15 crc kubenswrapper[5099]: I0121 18:42:15.902989 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv" event={"ID":"201688ad-f074-4dc2-9033-36f09f9e4a9d","Type":"ContainerStarted","Data":"ca049fae6e62fa8acb2bc2b10236c6532a660e9e131bbefcd86019a377848261"} Jan 21 18:42:17 crc kubenswrapper[5099]: I0121 18:42:17.874850 5099 scope.go:117] "RemoveContainer" 
containerID="251b96961c77e91054a6f8ad46f7c63a02622fd717ecd390ee019c39b74f42bd" Jan 21 18:42:17 crc kubenswrapper[5099]: I0121 18:42:17.939857 5099 generic.go:358] "Generic (PLEG): container finished" podID="2b184571-3b36-4cc9-9494-e673a05d23a2" containerID="663d36b82b341bab7ddb832a53d8feaabbb9d9a16b2488cc74ccb4fb9a14454d" exitCode=0 Jan 21 18:42:17 crc kubenswrapper[5099]: I0121 18:42:17.940117 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"2b184571-3b36-4cc9-9494-e673a05d23a2","Type":"ContainerDied","Data":"663d36b82b341bab7ddb832a53d8feaabbb9d9a16b2488cc74ccb4fb9a14454d"} Jan 21 18:42:26 crc kubenswrapper[5099]: I0121 18:42:26.913957 5099 scope.go:117] "RemoveContainer" containerID="2597b26be062a6fac05730bc303e55baf9c4e17d7a57f2962c96e40cd24ca0da" Jan 21 18:42:26 crc kubenswrapper[5099]: E0121 18:42:26.915510 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 18:42:31 crc kubenswrapper[5099]: I0121 18:42:31.742859 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Jan 21 18:42:31 crc kubenswrapper[5099]: I0121 18:42:31.798249 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kh999\" (UniqueName: \"kubernetes.io/projected/2b184571-3b36-4cc9-9494-e673a05d23a2-kube-api-access-kh999\") pod \"2b184571-3b36-4cc9-9494-e673a05d23a2\" (UID: \"2b184571-3b36-4cc9-9494-e673a05d23a2\") " Jan 21 18:42:31 crc kubenswrapper[5099]: I0121 18:42:31.803108 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b184571-3b36-4cc9-9494-e673a05d23a2-kube-api-access-kh999" (OuterVolumeSpecName: "kube-api-access-kh999") pod "2b184571-3b36-4cc9-9494-e673a05d23a2" (UID: "2b184571-3b36-4cc9-9494-e673a05d23a2"). InnerVolumeSpecName "kube-api-access-kh999". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:42:31 crc kubenswrapper[5099]: I0121 18:42:31.900278 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kh999\" (UniqueName: \"kubernetes.io/projected/2b184571-3b36-4cc9-9494-e673a05d23a2-kube-api-access-kh999\") on node \"crc\" DevicePath \"\"" Jan 21 18:42:31 crc kubenswrapper[5099]: I0121 18:42:31.921619 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_curl_2b184571-3b36-4cc9-9494-e673a05d23a2/curl/0.log" Jan 21 18:42:32 crc kubenswrapper[5099]: I0121 18:42:32.065783 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"2b184571-3b36-4cc9-9494-e673a05d23a2","Type":"ContainerDied","Data":"78292e6a3a9f6405cea3da251ddb6e576801f052755b323a4a856ef66bfcd0af"} Jan 21 18:42:32 crc kubenswrapper[5099]: I0121 18:42:32.065844 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78292e6a3a9f6405cea3da251ddb6e576801f052755b323a4a856ef66bfcd0af" Jan 21 18:42:32 crc kubenswrapper[5099]: I0121 18:42:32.065932 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Jan 21 18:42:32 crc kubenswrapper[5099]: I0121 18:42:32.070517 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-l9hjv" event={"ID":"b63ebd7b-dd5d-4649-9236-253b8c930ef9","Type":"ContainerStarted","Data":"e5184cd4c30d40d31fe3429690d34be19fb5566665055e8e2421d36184cf1766"} Jan 21 18:42:32 crc kubenswrapper[5099]: I0121 18:42:32.190455 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-694dc457d5-rk86p_d63f0418-bf6c-4a0a-8b72-8fa1215358c0/prometheus-webhook-snmp/0.log" Jan 21 18:42:38 crc kubenswrapper[5099]: I0121 18:42:38.153403 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-l9hjv" event={"ID":"b63ebd7b-dd5d-4649-9236-253b8c930ef9","Type":"ContainerStarted","Data":"256e6778a50f88e49ac7ff7dac32ce1c44cca3582a3ca160be9e808c2a9d1d1e"} Jan 21 18:42:38 crc kubenswrapper[5099]: I0121 18:42:38.183027 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-l9hjv" podStartSLOduration=1.9480335210000002 podStartE2EDuration="25.183004535s" podCreationTimestamp="2026-01-21 18:42:13 +0000 UTC" firstStartedPulling="2026-01-21 18:42:14.134634387 +0000 UTC m=+1691.548596858" lastFinishedPulling="2026-01-21 18:42:37.369605411 +0000 UTC m=+1714.783567872" observedRunningTime="2026-01-21 18:42:38.174887827 +0000 UTC m=+1715.588850308" watchObservedRunningTime="2026-01-21 18:42:38.183004535 +0000 UTC m=+1715.596966986" Jan 21 18:42:40 crc kubenswrapper[5099]: I0121 18:42:40.914585 5099 scope.go:117] "RemoveContainer" containerID="2597b26be062a6fac05730bc303e55baf9c4e17d7a57f2962c96e40cd24ca0da" Jan 21 18:42:40 crc kubenswrapper[5099]: E0121 18:42:40.915354 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 18:42:51 crc kubenswrapper[5099]: I0121 18:42:51.914299 5099 scope.go:117] "RemoveContainer" containerID="2597b26be062a6fac05730bc303e55baf9c4e17d7a57f2962c96e40cd24ca0da" Jan 21 18:42:51 crc kubenswrapper[5099]: E0121 18:42:51.915294 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 18:43:02 crc kubenswrapper[5099]: I0121 18:43:02.425087 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-694dc457d5-rk86p_d63f0418-bf6c-4a0a-8b72-8fa1215358c0/prometheus-webhook-snmp/0.log" Jan 21 18:43:03 crc kubenswrapper[5099]: I0121 18:43:03.931659 5099 scope.go:117] "RemoveContainer" containerID="2597b26be062a6fac05730bc303e55baf9c4e17d7a57f2962c96e40cd24ca0da" Jan 21 18:43:03 crc kubenswrapper[5099]: E0121 18:43:03.933590 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 18:43:06 crc kubenswrapper[5099]: I0121 18:43:06.433393 5099 generic.go:358] "Generic (PLEG): container finished" podID="b63ebd7b-dd5d-4649-9236-253b8c930ef9" containerID="e5184cd4c30d40d31fe3429690d34be19fb5566665055e8e2421d36184cf1766" exitCode=1 Jan 21 18:43:06 crc kubenswrapper[5099]: I0121 18:43:06.433570 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-l9hjv" event={"ID":"b63ebd7b-dd5d-4649-9236-253b8c930ef9","Type":"ContainerDied","Data":"e5184cd4c30d40d31fe3429690d34be19fb5566665055e8e2421d36184cf1766"} Jan 21 18:43:06 crc kubenswrapper[5099]: I0121 18:43:06.434671 5099 scope.go:117] "RemoveContainer" containerID="e5184cd4c30d40d31fe3429690d34be19fb5566665055e8e2421d36184cf1766" Jan 21 18:43:10 crc kubenswrapper[5099]: I0121 18:43:10.475187 5099 generic.go:358] "Generic (PLEG): container finished" podID="b63ebd7b-dd5d-4649-9236-253b8c930ef9" containerID="256e6778a50f88e49ac7ff7dac32ce1c44cca3582a3ca160be9e808c2a9d1d1e" exitCode=0 Jan 21 18:43:10 crc kubenswrapper[5099]: I0121 18:43:10.475311 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-l9hjv" event={"ID":"b63ebd7b-dd5d-4649-9236-253b8c930ef9","Type":"ContainerDied","Data":"256e6778a50f88e49ac7ff7dac32ce1c44cca3582a3ca160be9e808c2a9d1d1e"} Jan 21 18:43:11 crc kubenswrapper[5099]: I0121 18:43:11.787207 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-l9hjv" Jan 21 18:43:11 crc kubenswrapper[5099]: I0121 18:43:11.889465 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/b63ebd7b-dd5d-4649-9236-253b8c930ef9-healthcheck-log\") pod \"b63ebd7b-dd5d-4649-9236-253b8c930ef9\" (UID: \"b63ebd7b-dd5d-4649-9236-253b8c930ef9\") " Jan 21 18:43:11 crc kubenswrapper[5099]: I0121 18:43:11.889588 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/b63ebd7b-dd5d-4649-9236-253b8c930ef9-collectd-entrypoint-script\") pod \"b63ebd7b-dd5d-4649-9236-253b8c930ef9\" (UID: \"b63ebd7b-dd5d-4649-9236-253b8c930ef9\") " Jan 21 18:43:11 crc kubenswrapper[5099]: I0121 18:43:11.889642 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/b63ebd7b-dd5d-4649-9236-253b8c930ef9-ceilometer-entrypoint-script\") pod \"b63ebd7b-dd5d-4649-9236-253b8c930ef9\" (UID: \"b63ebd7b-dd5d-4649-9236-253b8c930ef9\") " Jan 21 18:43:11 crc kubenswrapper[5099]: I0121 18:43:11.889692 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/b63ebd7b-dd5d-4649-9236-253b8c930ef9-sensubility-config\") pod \"b63ebd7b-dd5d-4649-9236-253b8c930ef9\" (UID: \"b63ebd7b-dd5d-4649-9236-253b8c930ef9\") " Jan 21 18:43:11 crc kubenswrapper[5099]: I0121 18:43:11.889816 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvpbt\" (UniqueName: 
\"kubernetes.io/projected/b63ebd7b-dd5d-4649-9236-253b8c930ef9-kube-api-access-qvpbt\") pod \"b63ebd7b-dd5d-4649-9236-253b8c930ef9\" (UID: \"b63ebd7b-dd5d-4649-9236-253b8c930ef9\") " Jan 21 18:43:11 crc kubenswrapper[5099]: I0121 18:43:11.889873 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/b63ebd7b-dd5d-4649-9236-253b8c930ef9-ceilometer-publisher\") pod \"b63ebd7b-dd5d-4649-9236-253b8c930ef9\" (UID: \"b63ebd7b-dd5d-4649-9236-253b8c930ef9\") " Jan 21 18:43:11 crc kubenswrapper[5099]: I0121 18:43:11.889942 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/b63ebd7b-dd5d-4649-9236-253b8c930ef9-collectd-config\") pod \"b63ebd7b-dd5d-4649-9236-253b8c930ef9\" (UID: \"b63ebd7b-dd5d-4649-9236-253b8c930ef9\") " Jan 21 18:43:11 crc kubenswrapper[5099]: I0121 18:43:11.896925 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b63ebd7b-dd5d-4649-9236-253b8c930ef9-kube-api-access-qvpbt" (OuterVolumeSpecName: "kube-api-access-qvpbt") pod "b63ebd7b-dd5d-4649-9236-253b8c930ef9" (UID: "b63ebd7b-dd5d-4649-9236-253b8c930ef9"). InnerVolumeSpecName "kube-api-access-qvpbt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:43:11 crc kubenswrapper[5099]: I0121 18:43:11.909822 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b63ebd7b-dd5d-4649-9236-253b8c930ef9-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "b63ebd7b-dd5d-4649-9236-253b8c930ef9" (UID: "b63ebd7b-dd5d-4649-9236-253b8c930ef9"). InnerVolumeSpecName "sensubility-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:43:11 crc kubenswrapper[5099]: I0121 18:43:11.909882 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b63ebd7b-dd5d-4649-9236-253b8c930ef9-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "b63ebd7b-dd5d-4649-9236-253b8c930ef9" (UID: "b63ebd7b-dd5d-4649-9236-253b8c930ef9"). InnerVolumeSpecName "collectd-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:43:11 crc kubenswrapper[5099]: I0121 18:43:11.911288 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b63ebd7b-dd5d-4649-9236-253b8c930ef9-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "b63ebd7b-dd5d-4649-9236-253b8c930ef9" (UID: "b63ebd7b-dd5d-4649-9236-253b8c930ef9"). InnerVolumeSpecName "ceilometer-publisher". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:43:11 crc kubenswrapper[5099]: I0121 18:43:11.911722 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b63ebd7b-dd5d-4649-9236-253b8c930ef9-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "b63ebd7b-dd5d-4649-9236-253b8c930ef9" (UID: "b63ebd7b-dd5d-4649-9236-253b8c930ef9"). InnerVolumeSpecName "healthcheck-log". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:43:11 crc kubenswrapper[5099]: I0121 18:43:11.918662 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b63ebd7b-dd5d-4649-9236-253b8c930ef9-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "b63ebd7b-dd5d-4649-9236-253b8c930ef9" (UID: "b63ebd7b-dd5d-4649-9236-253b8c930ef9"). 
InnerVolumeSpecName "collectd-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:43:11 crc kubenswrapper[5099]: I0121 18:43:11.929763 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b63ebd7b-dd5d-4649-9236-253b8c930ef9-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "b63ebd7b-dd5d-4649-9236-253b8c930ef9" (UID: "b63ebd7b-dd5d-4649-9236-253b8c930ef9"). InnerVolumeSpecName "ceilometer-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:43:11 crc kubenswrapper[5099]: I0121 18:43:11.991550 5099 reconciler_common.go:299] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/b63ebd7b-dd5d-4649-9236-253b8c930ef9-healthcheck-log\") on node \"crc\" DevicePath \"\"" Jan 21 18:43:11 crc kubenswrapper[5099]: I0121 18:43:11.991592 5099 reconciler_common.go:299] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/b63ebd7b-dd5d-4649-9236-253b8c930ef9-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Jan 21 18:43:11 crc kubenswrapper[5099]: I0121 18:43:11.991604 5099 reconciler_common.go:299] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/b63ebd7b-dd5d-4649-9236-253b8c930ef9-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\"" Jan 21 18:43:11 crc kubenswrapper[5099]: I0121 18:43:11.991614 5099 reconciler_common.go:299] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/b63ebd7b-dd5d-4649-9236-253b8c930ef9-sensubility-config\") on node \"crc\" DevicePath \"\"" Jan 21 18:43:11 crc kubenswrapper[5099]: I0121 18:43:11.991626 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qvpbt\" (UniqueName: \"kubernetes.io/projected/b63ebd7b-dd5d-4649-9236-253b8c930ef9-kube-api-access-qvpbt\") on node \"crc\" DevicePath \"\"" Jan 21 18:43:11 crc kubenswrapper[5099]: I0121 18:43:11.991637 5099 reconciler_common.go:299] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/b63ebd7b-dd5d-4649-9236-253b8c930ef9-ceilometer-publisher\") on node \"crc\" DevicePath \"\"" Jan 21 18:43:11 crc kubenswrapper[5099]: I0121 18:43:11.991652 5099 reconciler_common.go:299] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/b63ebd7b-dd5d-4649-9236-253b8c930ef9-collectd-config\") on node \"crc\" DevicePath \"\"" Jan 21 18:43:12 crc kubenswrapper[5099]: I0121 18:43:12.500668 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-l9hjv" event={"ID":"b63ebd7b-dd5d-4649-9236-253b8c930ef9","Type":"ContainerDied","Data":"b155008e8571fed2032e932a9df7e843551a32da07905f61113f8ed1d1043268"} Jan 21 18:43:12 crc kubenswrapper[5099]: I0121 18:43:12.501415 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b155008e8571fed2032e932a9df7e843551a32da07905f61113f8ed1d1043268" Jan 21 18:43:12 crc kubenswrapper[5099]: I0121 18:43:12.500689 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-l9hjv" Jan 21 18:43:18 crc kubenswrapper[5099]: I0121 18:43:18.914166 5099 scope.go:117] "RemoveContainer" containerID="2597b26be062a6fac05730bc303e55baf9c4e17d7a57f2962c96e40cd24ca0da" Jan 21 18:43:18 crc kubenswrapper[5099]: E0121 18:43:18.916069 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 18:43:19 crc kubenswrapper[5099]: I0121 18:43:19.029071 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-8md8w"] Jan 21 18:43:19 crc kubenswrapper[5099]: I0121 18:43:19.030299 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b63ebd7b-dd5d-4649-9236-253b8c930ef9" containerName="smoketest-collectd" Jan 21 18:43:19 crc kubenswrapper[5099]: I0121 18:43:19.030327 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="b63ebd7b-dd5d-4649-9236-253b8c930ef9" containerName="smoketest-collectd" Jan 21 18:43:19 crc kubenswrapper[5099]: I0121 18:43:19.030362 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b63ebd7b-dd5d-4649-9236-253b8c930ef9" containerName="smoketest-ceilometer" Jan 21 18:43:19 crc kubenswrapper[5099]: I0121 18:43:19.030369 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="b63ebd7b-dd5d-4649-9236-253b8c930ef9" containerName="smoketest-ceilometer" Jan 21 18:43:19 crc kubenswrapper[5099]: I0121 18:43:19.030382 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2b184571-3b36-4cc9-9494-e673a05d23a2" containerName="curl" Jan 21 18:43:19 crc kubenswrapper[5099]: I0121 18:43:19.030388 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b184571-3b36-4cc9-9494-e673a05d23a2" containerName="curl" Jan 21 18:43:19 crc kubenswrapper[5099]: I0121 18:43:19.030522 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="b63ebd7b-dd5d-4649-9236-253b8c930ef9" containerName="smoketest-ceilometer" Jan 21 18:43:19 crc kubenswrapper[5099]: I0121 18:43:19.030538 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="b63ebd7b-dd5d-4649-9236-253b8c930ef9" containerName="smoketest-collectd" Jan 21 18:43:19 crc kubenswrapper[5099]: I0121 18:43:19.030547 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="2b184571-3b36-4cc9-9494-e673a05d23a2" containerName="curl" Jan 21 18:43:19 crc kubenswrapper[5099]: I0121 18:43:19.035073 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-8md8w" Jan 21 18:43:19 crc kubenswrapper[5099]: I0121 18:43:19.039263 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-config\"" Jan 21 18:43:19 crc kubenswrapper[5099]: I0121 18:43:19.039424 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-sensubility-config\"" Jan 21 18:43:19 crc kubenswrapper[5099]: I0121 18:43:19.041626 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-healthcheck-log\"" Jan 21 18:43:19 crc kubenswrapper[5099]: I0121 18:43:19.041836 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-entrypoint-script\"" Jan 21 18:43:19 crc kubenswrapper[5099]: I0121 18:43:19.042009 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-entrypoint-script\"" Jan 21 18:43:19 crc kubenswrapper[5099]: I0121 18:43:19.042009 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-publisher\"" Jan 21 18:43:19 crc kubenswrapper[5099]: I0121 18:43:19.045096 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-8md8w"] Jan 21 18:43:19 crc kubenswrapper[5099]: I0121 18:43:19.123582 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/5d639bb9-54bd-489e-b9ef-7fa15d6649d0-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-8md8w\" (UID: \"5d639bb9-54bd-489e-b9ef-7fa15d6649d0\") " pod="service-telemetry/stf-smoketest-smoke1-8md8w" Jan 21 18:43:19 crc kubenswrapper[5099]: I0121 18:43:19.124316 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/5d639bb9-54bd-489e-b9ef-7fa15d6649d0-healthcheck-log\") pod \"stf-smoketest-smoke1-8md8w\" (UID: \"5d639bb9-54bd-489e-b9ef-7fa15d6649d0\") " pod="service-telemetry/stf-smoketest-smoke1-8md8w" Jan 21 18:43:19 crc kubenswrapper[5099]: I0121 18:43:19.124352 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/5d639bb9-54bd-489e-b9ef-7fa15d6649d0-sensubility-config\") pod \"stf-smoketest-smoke1-8md8w\" (UID: \"5d639bb9-54bd-489e-b9ef-7fa15d6649d0\") " pod="service-telemetry/stf-smoketest-smoke1-8md8w" Jan 21 18:43:19 crc kubenswrapper[5099]: I0121 18:43:19.124370 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/5d639bb9-54bd-489e-b9ef-7fa15d6649d0-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-8md8w\" (UID: \"5d639bb9-54bd-489e-b9ef-7fa15d6649d0\") " pod="service-telemetry/stf-smoketest-smoke1-8md8w" Jan 21 18:43:19 crc kubenswrapper[5099]: I0121 18:43:19.124403 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzr7h\" (UniqueName: \"kubernetes.io/projected/5d639bb9-54bd-489e-b9ef-7fa15d6649d0-kube-api-access-nzr7h\") pod \"stf-smoketest-smoke1-8md8w\" (UID: 
\"5d639bb9-54bd-489e-b9ef-7fa15d6649d0\") " pod="service-telemetry/stf-smoketest-smoke1-8md8w" Jan 21 18:43:19 crc kubenswrapper[5099]: I0121 18:43:19.124430 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/5d639bb9-54bd-489e-b9ef-7fa15d6649d0-ceilometer-publisher\") pod \"stf-smoketest-smoke1-8md8w\" (UID: \"5d639bb9-54bd-489e-b9ef-7fa15d6649d0\") " pod="service-telemetry/stf-smoketest-smoke1-8md8w" Jan 21 18:43:19 crc kubenswrapper[5099]: I0121 18:43:19.124764 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/5d639bb9-54bd-489e-b9ef-7fa15d6649d0-collectd-config\") pod \"stf-smoketest-smoke1-8md8w\" (UID: \"5d639bb9-54bd-489e-b9ef-7fa15d6649d0\") " pod="service-telemetry/stf-smoketest-smoke1-8md8w" Jan 21 18:43:19 crc kubenswrapper[5099]: I0121 18:43:19.226975 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/5d639bb9-54bd-489e-b9ef-7fa15d6649d0-healthcheck-log\") pod \"stf-smoketest-smoke1-8md8w\" (UID: \"5d639bb9-54bd-489e-b9ef-7fa15d6649d0\") " pod="service-telemetry/stf-smoketest-smoke1-8md8w" Jan 21 18:43:19 crc kubenswrapper[5099]: I0121 18:43:19.227056 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/5d639bb9-54bd-489e-b9ef-7fa15d6649d0-sensubility-config\") pod \"stf-smoketest-smoke1-8md8w\" (UID: \"5d639bb9-54bd-489e-b9ef-7fa15d6649d0\") " pod="service-telemetry/stf-smoketest-smoke1-8md8w" Jan 21 18:43:19 crc kubenswrapper[5099]: I0121 18:43:19.227243 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/5d639bb9-54bd-489e-b9ef-7fa15d6649d0-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-8md8w\" (UID: \"5d639bb9-54bd-489e-b9ef-7fa15d6649d0\") " pod="service-telemetry/stf-smoketest-smoke1-8md8w" Jan 21 18:43:19 crc kubenswrapper[5099]: I0121 18:43:19.227432 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nzr7h\" (UniqueName: \"kubernetes.io/projected/5d639bb9-54bd-489e-b9ef-7fa15d6649d0-kube-api-access-nzr7h\") pod \"stf-smoketest-smoke1-8md8w\" (UID: \"5d639bb9-54bd-489e-b9ef-7fa15d6649d0\") " pod="service-telemetry/stf-smoketest-smoke1-8md8w" Jan 21 18:43:19 crc kubenswrapper[5099]: I0121 18:43:19.227511 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/5d639bb9-54bd-489e-b9ef-7fa15d6649d0-ceilometer-publisher\") pod \"stf-smoketest-smoke1-8md8w\" (UID: \"5d639bb9-54bd-489e-b9ef-7fa15d6649d0\") " pod="service-telemetry/stf-smoketest-smoke1-8md8w" Jan 21 18:43:19 crc kubenswrapper[5099]: I0121 18:43:19.227629 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/5d639bb9-54bd-489e-b9ef-7fa15d6649d0-collectd-config\") pod \"stf-smoketest-smoke1-8md8w\" (UID: \"5d639bb9-54bd-489e-b9ef-7fa15d6649d0\") " pod="service-telemetry/stf-smoketest-smoke1-8md8w" Jan 21 18:43:19 crc kubenswrapper[5099]: I0121 18:43:19.227840 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: 
\"kubernetes.io/configmap/5d639bb9-54bd-489e-b9ef-7fa15d6649d0-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-8md8w\" (UID: \"5d639bb9-54bd-489e-b9ef-7fa15d6649d0\") " pod="service-telemetry/stf-smoketest-smoke1-8md8w" Jan 21 18:43:19 crc kubenswrapper[5099]: I0121 18:43:19.228491 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/5d639bb9-54bd-489e-b9ef-7fa15d6649d0-sensubility-config\") pod \"stf-smoketest-smoke1-8md8w\" (UID: \"5d639bb9-54bd-489e-b9ef-7fa15d6649d0\") " pod="service-telemetry/stf-smoketest-smoke1-8md8w" Jan 21 18:43:19 crc kubenswrapper[5099]: I0121 18:43:19.228520 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/5d639bb9-54bd-489e-b9ef-7fa15d6649d0-healthcheck-log\") pod \"stf-smoketest-smoke1-8md8w\" (UID: \"5d639bb9-54bd-489e-b9ef-7fa15d6649d0\") " pod="service-telemetry/stf-smoketest-smoke1-8md8w" Jan 21 18:43:19 crc kubenswrapper[5099]: I0121 18:43:19.228798 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/5d639bb9-54bd-489e-b9ef-7fa15d6649d0-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-8md8w\" (UID: \"5d639bb9-54bd-489e-b9ef-7fa15d6649d0\") " pod="service-telemetry/stf-smoketest-smoke1-8md8w" Jan 21 18:43:19 crc kubenswrapper[5099]: I0121 18:43:19.228926 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/5d639bb9-54bd-489e-b9ef-7fa15d6649d0-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-8md8w\" (UID: \"5d639bb9-54bd-489e-b9ef-7fa15d6649d0\") " pod="service-telemetry/stf-smoketest-smoke1-8md8w" Jan 21 18:43:19 crc kubenswrapper[5099]: I0121 18:43:19.229320 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/5d639bb9-54bd-489e-b9ef-7fa15d6649d0-ceilometer-publisher\") pod \"stf-smoketest-smoke1-8md8w\" (UID: \"5d639bb9-54bd-489e-b9ef-7fa15d6649d0\") " pod="service-telemetry/stf-smoketest-smoke1-8md8w" Jan 21 18:43:19 crc kubenswrapper[5099]: I0121 18:43:19.229490 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/5d639bb9-54bd-489e-b9ef-7fa15d6649d0-collectd-config\") pod \"stf-smoketest-smoke1-8md8w\" (UID: \"5d639bb9-54bd-489e-b9ef-7fa15d6649d0\") " pod="service-telemetry/stf-smoketest-smoke1-8md8w" Jan 21 18:43:19 crc kubenswrapper[5099]: I0121 18:43:19.273171 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzr7h\" (UniqueName: \"kubernetes.io/projected/5d639bb9-54bd-489e-b9ef-7fa15d6649d0-kube-api-access-nzr7h\") pod \"stf-smoketest-smoke1-8md8w\" (UID: \"5d639bb9-54bd-489e-b9ef-7fa15d6649d0\") " pod="service-telemetry/stf-smoketest-smoke1-8md8w" Jan 21 18:43:19 crc kubenswrapper[5099]: I0121 18:43:19.359251 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-8md8w" Jan 21 18:43:19 crc kubenswrapper[5099]: I0121 18:43:19.667890 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-8md8w"] Jan 21 18:43:20 crc kubenswrapper[5099]: I0121 18:43:20.592719 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-8md8w" event={"ID":"5d639bb9-54bd-489e-b9ef-7fa15d6649d0","Type":"ContainerStarted","Data":"405e9c3e9fb39660fa05f6d586924b070bbc4a4749d7195468ef2cf88a7f205b"} Jan 21 18:43:20 crc kubenswrapper[5099]: I0121 18:43:20.593408 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-8md8w" event={"ID":"5d639bb9-54bd-489e-b9ef-7fa15d6649d0","Type":"ContainerStarted","Data":"5864bdce765063147f428c565c779e2a6a79b00b455e98988733633f1a5ef6a6"} Jan 21 18:43:20 crc kubenswrapper[5099]: I0121 18:43:20.593438 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-8md8w" event={"ID":"5d639bb9-54bd-489e-b9ef-7fa15d6649d0","Type":"ContainerStarted","Data":"7b50bf31285fce3e9a1c5e44f9abf1595ddddd2ec856c16b98f1d32e2a82ab45"} Jan 21 18:43:20 crc kubenswrapper[5099]: I0121 18:43:20.626452 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-8md8w" podStartSLOduration=1.6264131320000002 podStartE2EDuration="1.626413132s" podCreationTimestamp="2026-01-21 18:43:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 18:43:20.617302329 +0000 UTC m=+1758.031264820" watchObservedRunningTime="2026-01-21 18:43:20.626413132 +0000 UTC m=+1758.040375643" Jan 21 18:43:29 crc kubenswrapper[5099]: I0121 18:43:29.916244 5099 scope.go:117] "RemoveContainer" containerID="2597b26be062a6fac05730bc303e55baf9c4e17d7a57f2962c96e40cd24ca0da" Jan 21 18:43:29 crc kubenswrapper[5099]: E0121 18:43:29.917478 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 18:43:40 crc kubenswrapper[5099]: I0121 18:43:40.914045 5099 scope.go:117] "RemoveContainer" containerID="2597b26be062a6fac05730bc303e55baf9c4e17d7a57f2962c96e40cd24ca0da" Jan 21 18:43:40 crc kubenswrapper[5099]: E0121 18:43:40.915501 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 18:43:52 crc kubenswrapper[5099]: I0121 18:43:52.889421 5099 generic.go:358] "Generic (PLEG): container finished" podID="5d639bb9-54bd-489e-b9ef-7fa15d6649d0" containerID="405e9c3e9fb39660fa05f6d586924b070bbc4a4749d7195468ef2cf88a7f205b" exitCode=0 Jan 21 18:43:52 crc kubenswrapper[5099]: I0121 18:43:52.889514 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/stf-smoketest-smoke1-8md8w" event={"ID":"5d639bb9-54bd-489e-b9ef-7fa15d6649d0","Type":"ContainerDied","Data":"405e9c3e9fb39660fa05f6d586924b070bbc4a4749d7195468ef2cf88a7f205b"} Jan 21 18:43:52 crc kubenswrapper[5099]: I0121 18:43:52.891409 5099 scope.go:117] "RemoveContainer" containerID="405e9c3e9fb39660fa05f6d586924b070bbc4a4749d7195468ef2cf88a7f205b" Jan 21 18:43:53 crc kubenswrapper[5099]: I0121 18:43:53.902346 5099 generic.go:358] "Generic (PLEG): container finished" podID="5d639bb9-54bd-489e-b9ef-7fa15d6649d0" containerID="5864bdce765063147f428c565c779e2a6a79b00b455e98988733633f1a5ef6a6" exitCode=0 Jan 21 18:43:53 crc kubenswrapper[5099]: I0121 18:43:53.902431 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-8md8w" event={"ID":"5d639bb9-54bd-489e-b9ef-7fa15d6649d0","Type":"ContainerDied","Data":"5864bdce765063147f428c565c779e2a6a79b00b455e98988733633f1a5ef6a6"} Jan 21 18:43:53 crc kubenswrapper[5099]: I0121 18:43:53.914312 5099 scope.go:117] "RemoveContainer" containerID="2597b26be062a6fac05730bc303e55baf9c4e17d7a57f2962c96e40cd24ca0da" Jan 21 18:43:53 crc kubenswrapper[5099]: E0121 18:43:53.914800 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 18:43:55 crc kubenswrapper[5099]: I0121 18:43:55.202399 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-8md8w" Jan 21 18:43:55 crc kubenswrapper[5099]: I0121 18:43:55.279864 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/5d639bb9-54bd-489e-b9ef-7fa15d6649d0-ceilometer-entrypoint-script\") pod \"5d639bb9-54bd-489e-b9ef-7fa15d6649d0\" (UID: \"5d639bb9-54bd-489e-b9ef-7fa15d6649d0\") " Jan 21 18:43:55 crc kubenswrapper[5099]: I0121 18:43:55.279974 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/5d639bb9-54bd-489e-b9ef-7fa15d6649d0-ceilometer-publisher\") pod \"5d639bb9-54bd-489e-b9ef-7fa15d6649d0\" (UID: \"5d639bb9-54bd-489e-b9ef-7fa15d6649d0\") " Jan 21 18:43:55 crc kubenswrapper[5099]: I0121 18:43:55.280007 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/5d639bb9-54bd-489e-b9ef-7fa15d6649d0-healthcheck-log\") pod \"5d639bb9-54bd-489e-b9ef-7fa15d6649d0\" (UID: \"5d639bb9-54bd-489e-b9ef-7fa15d6649d0\") " Jan 21 18:43:55 crc kubenswrapper[5099]: I0121 18:43:55.280036 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/5d639bb9-54bd-489e-b9ef-7fa15d6649d0-collectd-config\") pod \"5d639bb9-54bd-489e-b9ef-7fa15d6649d0\" (UID: \"5d639bb9-54bd-489e-b9ef-7fa15d6649d0\") " Jan 21 18:43:55 crc kubenswrapper[5099]: I0121 18:43:55.280059 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: 
\"kubernetes.io/configmap/5d639bb9-54bd-489e-b9ef-7fa15d6649d0-collectd-entrypoint-script\") pod \"5d639bb9-54bd-489e-b9ef-7fa15d6649d0\" (UID: \"5d639bb9-54bd-489e-b9ef-7fa15d6649d0\") " Jan 21 18:43:55 crc kubenswrapper[5099]: I0121 18:43:55.280183 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzr7h\" (UniqueName: \"kubernetes.io/projected/5d639bb9-54bd-489e-b9ef-7fa15d6649d0-kube-api-access-nzr7h\") pod \"5d639bb9-54bd-489e-b9ef-7fa15d6649d0\" (UID: \"5d639bb9-54bd-489e-b9ef-7fa15d6649d0\") " Jan 21 18:43:55 crc kubenswrapper[5099]: I0121 18:43:55.280350 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/5d639bb9-54bd-489e-b9ef-7fa15d6649d0-sensubility-config\") pod \"5d639bb9-54bd-489e-b9ef-7fa15d6649d0\" (UID: \"5d639bb9-54bd-489e-b9ef-7fa15d6649d0\") " Jan 21 18:43:55 crc kubenswrapper[5099]: I0121 18:43:55.289243 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d639bb9-54bd-489e-b9ef-7fa15d6649d0-kube-api-access-nzr7h" (OuterVolumeSpecName: "kube-api-access-nzr7h") pod "5d639bb9-54bd-489e-b9ef-7fa15d6649d0" (UID: "5d639bb9-54bd-489e-b9ef-7fa15d6649d0"). InnerVolumeSpecName "kube-api-access-nzr7h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:43:55 crc kubenswrapper[5099]: I0121 18:43:55.300794 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d639bb9-54bd-489e-b9ef-7fa15d6649d0-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "5d639bb9-54bd-489e-b9ef-7fa15d6649d0" (UID: "5d639bb9-54bd-489e-b9ef-7fa15d6649d0"). InnerVolumeSpecName "ceilometer-publisher". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:43:55 crc kubenswrapper[5099]: I0121 18:43:55.301581 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d639bb9-54bd-489e-b9ef-7fa15d6649d0-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "5d639bb9-54bd-489e-b9ef-7fa15d6649d0" (UID: "5d639bb9-54bd-489e-b9ef-7fa15d6649d0"). InnerVolumeSpecName "collectd-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:43:55 crc kubenswrapper[5099]: I0121 18:43:55.303546 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d639bb9-54bd-489e-b9ef-7fa15d6649d0-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "5d639bb9-54bd-489e-b9ef-7fa15d6649d0" (UID: "5d639bb9-54bd-489e-b9ef-7fa15d6649d0"). InnerVolumeSpecName "collectd-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:43:55 crc kubenswrapper[5099]: I0121 18:43:55.305301 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d639bb9-54bd-489e-b9ef-7fa15d6649d0-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "5d639bb9-54bd-489e-b9ef-7fa15d6649d0" (UID: "5d639bb9-54bd-489e-b9ef-7fa15d6649d0"). InnerVolumeSpecName "ceilometer-entrypoint-script". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:43:55 crc kubenswrapper[5099]: I0121 18:43:55.305544 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d639bb9-54bd-489e-b9ef-7fa15d6649d0-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "5d639bb9-54bd-489e-b9ef-7fa15d6649d0" (UID: "5d639bb9-54bd-489e-b9ef-7fa15d6649d0"). InnerVolumeSpecName "healthcheck-log". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:43:55 crc kubenswrapper[5099]: I0121 18:43:55.305604 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d639bb9-54bd-489e-b9ef-7fa15d6649d0-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "5d639bb9-54bd-489e-b9ef-7fa15d6649d0" (UID: "5d639bb9-54bd-489e-b9ef-7fa15d6649d0"). InnerVolumeSpecName "sensubility-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:43:55 crc kubenswrapper[5099]: I0121 18:43:55.383053 5099 reconciler_common.go:299] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/5d639bb9-54bd-489e-b9ef-7fa15d6649d0-sensubility-config\") on node \"crc\" DevicePath \"\"" Jan 21 18:43:55 crc kubenswrapper[5099]: I0121 18:43:55.383105 5099 reconciler_common.go:299] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/5d639bb9-54bd-489e-b9ef-7fa15d6649d0-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\"" Jan 21 18:43:55 crc kubenswrapper[5099]: I0121 18:43:55.383127 5099 reconciler_common.go:299] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/5d639bb9-54bd-489e-b9ef-7fa15d6649d0-ceilometer-publisher\") on node \"crc\" DevicePath \"\"" Jan 21 18:43:55 crc kubenswrapper[5099]: I0121 18:43:55.383139 5099 reconciler_common.go:299] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/5d639bb9-54bd-489e-b9ef-7fa15d6649d0-healthcheck-log\") on node \"crc\" DevicePath \"\"" Jan 21 18:43:55 crc kubenswrapper[5099]: I0121 18:43:55.383148 5099 reconciler_common.go:299] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/5d639bb9-54bd-489e-b9ef-7fa15d6649d0-collectd-config\") on node \"crc\" DevicePath \"\"" Jan 21 18:43:55 crc kubenswrapper[5099]: I0121 18:43:55.383157 5099 reconciler_common.go:299] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/5d639bb9-54bd-489e-b9ef-7fa15d6649d0-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Jan 21 18:43:55 crc kubenswrapper[5099]: I0121 18:43:55.383166 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nzr7h\" (UniqueName: \"kubernetes.io/projected/5d639bb9-54bd-489e-b9ef-7fa15d6649d0-kube-api-access-nzr7h\") on node \"crc\" DevicePath \"\"" Jan 21 18:43:55 crc kubenswrapper[5099]: I0121 18:43:55.921482 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-8md8w" Jan 21 18:43:55 crc kubenswrapper[5099]: I0121 18:43:55.931940 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-8md8w" event={"ID":"5d639bb9-54bd-489e-b9ef-7fa15d6649d0","Type":"ContainerDied","Data":"7b50bf31285fce3e9a1c5e44f9abf1595ddddd2ec856c16b98f1d32e2a82ab45"} Jan 21 18:43:55 crc kubenswrapper[5099]: I0121 18:43:55.932371 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b50bf31285fce3e9a1c5e44f9abf1595ddddd2ec856c16b98f1d32e2a82ab45" Jan 21 18:43:57 crc kubenswrapper[5099]: I0121 18:43:57.353477 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-8md8w_5d639bb9-54bd-489e-b9ef-7fa15d6649d0/smoketest-collectd/0.log" Jan 21 18:43:57 crc kubenswrapper[5099]: I0121 18:43:57.658470 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-8md8w_5d639bb9-54bd-489e-b9ef-7fa15d6649d0/smoketest-ceilometer/0.log" Jan 21 18:43:57 crc kubenswrapper[5099]: I0121 18:43:57.958099 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lsqp2"] Jan 21 18:43:57 crc kubenswrapper[5099]: I0121 18:43:57.959026 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5d639bb9-54bd-489e-b9ef-7fa15d6649d0" containerName="smoketest-collectd" Jan 21 18:43:57 crc kubenswrapper[5099]: I0121 18:43:57.959051 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d639bb9-54bd-489e-b9ef-7fa15d6649d0" containerName="smoketest-collectd" Jan 21 18:43:57 crc kubenswrapper[5099]: I0121 18:43:57.959073 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5d639bb9-54bd-489e-b9ef-7fa15d6649d0" containerName="smoketest-ceilometer" Jan 21 18:43:57 crc kubenswrapper[5099]: I0121 18:43:57.959082 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d639bb9-54bd-489e-b9ef-7fa15d6649d0" containerName="smoketest-ceilometer" Jan 21 18:43:57 crc kubenswrapper[5099]: I0121 18:43:57.959226 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="5d639bb9-54bd-489e-b9ef-7fa15d6649d0" containerName="smoketest-ceilometer" Jan 21 18:43:57 crc kubenswrapper[5099]: I0121 18:43:57.959251 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="5d639bb9-54bd-489e-b9ef-7fa15d6649d0" containerName="smoketest-collectd" Jan 21 18:43:58 crc kubenswrapper[5099]: I0121 18:43:58.164703 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-interconnect-55bf8d5cb-xddc7_07059825-6270-48dd-9737-b401f10d1f1e/default-interconnect/0.log" Jan 21 18:43:58 crc kubenswrapper[5099]: I0121 18:43:58.403308 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lsqp2"] Jan 21 18:43:58 crc kubenswrapper[5099]: I0121 18:43:58.403497 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lsqp2" Jan 21 18:43:58 crc kubenswrapper[5099]: I0121 18:43:58.443295 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk_0afa2545-4e28-415f-b67f-e1825e024da4/bridge/2.log" Jan 21 18:43:58 crc kubenswrapper[5099]: I0121 18:43:58.536832 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b68768e7-7e4f-4a67-b664-f7371a7d7037-utilities\") pod \"redhat-operators-lsqp2\" (UID: \"b68768e7-7e4f-4a67-b664-f7371a7d7037\") " pod="openshift-marketplace/redhat-operators-lsqp2" Jan 21 18:43:58 crc kubenswrapper[5099]: I0121 18:43:58.536902 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b68768e7-7e4f-4a67-b664-f7371a7d7037-catalog-content\") pod \"redhat-operators-lsqp2\" (UID: \"b68768e7-7e4f-4a67-b664-f7371a7d7037\") " pod="openshift-marketplace/redhat-operators-lsqp2" Jan 21 18:43:58 crc kubenswrapper[5099]: I0121 18:43:58.537043 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd8tp\" (UniqueName: \"kubernetes.io/projected/b68768e7-7e4f-4a67-b664-f7371a7d7037-kube-api-access-nd8tp\") pod \"redhat-operators-lsqp2\" (UID: \"b68768e7-7e4f-4a67-b664-f7371a7d7037\") " pod="openshift-marketplace/redhat-operators-lsqp2" Jan 21 18:43:58 crc kubenswrapper[5099]: I0121 18:43:58.639271 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nd8tp\" (UniqueName: \"kubernetes.io/projected/b68768e7-7e4f-4a67-b664-f7371a7d7037-kube-api-access-nd8tp\") pod \"redhat-operators-lsqp2\" (UID: \"b68768e7-7e4f-4a67-b664-f7371a7d7037\") " pod="openshift-marketplace/redhat-operators-lsqp2" Jan 21 18:43:58 crc kubenswrapper[5099]: I0121 18:43:58.640103 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b68768e7-7e4f-4a67-b664-f7371a7d7037-utilities\") pod \"redhat-operators-lsqp2\" (UID: \"b68768e7-7e4f-4a67-b664-f7371a7d7037\") " pod="openshift-marketplace/redhat-operators-lsqp2" Jan 21 18:43:58 crc kubenswrapper[5099]: I0121 18:43:58.640160 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b68768e7-7e4f-4a67-b664-f7371a7d7037-catalog-content\") pod \"redhat-operators-lsqp2\" (UID: \"b68768e7-7e4f-4a67-b664-f7371a7d7037\") " pod="openshift-marketplace/redhat-operators-lsqp2" Jan 21 18:43:58 crc kubenswrapper[5099]: I0121 18:43:58.640928 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b68768e7-7e4f-4a67-b664-f7371a7d7037-utilities\") pod \"redhat-operators-lsqp2\" (UID: \"b68768e7-7e4f-4a67-b664-f7371a7d7037\") " pod="openshift-marketplace/redhat-operators-lsqp2" Jan 21 18:43:58 crc kubenswrapper[5099]: I0121 18:43:58.640946 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b68768e7-7e4f-4a67-b664-f7371a7d7037-catalog-content\") pod \"redhat-operators-lsqp2\" (UID: \"b68768e7-7e4f-4a67-b664-f7371a7d7037\") " pod="openshift-marketplace/redhat-operators-lsqp2" Jan 21 18:43:58 crc kubenswrapper[5099]: I0121 18:43:58.666284 5099 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nd8tp\" (UniqueName: \"kubernetes.io/projected/b68768e7-7e4f-4a67-b664-f7371a7d7037-kube-api-access-nd8tp\") pod \"redhat-operators-lsqp2\" (UID: \"b68768e7-7e4f-4a67-b664-f7371a7d7037\") " pod="openshift-marketplace/redhat-operators-lsqp2" Jan 21 18:43:58 crc kubenswrapper[5099]: I0121 18:43:58.724746 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lsqp2" Jan 21 18:43:58 crc kubenswrapper[5099]: I0121 18:43:58.731095 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk_0afa2545-4e28-415f-b67f-e1825e024da4/sg-core/0.log" Jan 21 18:43:59 crc kubenswrapper[5099]: I0121 18:43:59.274151 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-7dc4588586-kd9rf_5b9fc1f3-5435-4492-a8fb-4ab02d6de3f6/bridge/2.log" Jan 21 18:43:59 crc kubenswrapper[5099]: I0121 18:43:59.276673 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lsqp2"] Jan 21 18:43:59 crc kubenswrapper[5099]: I0121 18:43:59.564339 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-7dc4588586-kd9rf_5b9fc1f3-5435-4492-a8fb-4ab02d6de3f6/sg-core/0.log" Jan 21 18:43:59 crc kubenswrapper[5099]: I0121 18:43:59.895043 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq_a82cf411-eed8-4850-9fbf-a0c128c16d13/bridge/2.log" Jan 21 18:43:59 crc kubenswrapper[5099]: I0121 18:43:59.978963 5099 generic.go:358] "Generic (PLEG): container finished" podID="b68768e7-7e4f-4a67-b664-f7371a7d7037" containerID="a499a1ebe73a282807979074bd4d17389c2fe6b2bc41b53803c5cc57e8af6f61" exitCode=0 Jan 21 18:43:59 crc kubenswrapper[5099]: I0121 18:43:59.979090 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lsqp2" event={"ID":"b68768e7-7e4f-4a67-b664-f7371a7d7037","Type":"ContainerDied","Data":"a499a1ebe73a282807979074bd4d17389c2fe6b2bc41b53803c5cc57e8af6f61"} Jan 21 18:43:59 crc kubenswrapper[5099]: I0121 18:43:59.979170 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lsqp2" event={"ID":"b68768e7-7e4f-4a67-b664-f7371a7d7037","Type":"ContainerStarted","Data":"1fb0274d2ac7f99060638468b0b2a51b0f3cb552ce28f49e9958ff7afe8f5afb"} Jan 21 18:44:00 crc kubenswrapper[5099]: I0121 18:44:00.141317 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483684-xbtgm"] Jan 21 18:44:00 crc kubenswrapper[5099]: I0121 18:44:00.147102 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483684-xbtgm" Jan 21 18:44:00 crc kubenswrapper[5099]: I0121 18:44:00.168080 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-z79sf\"" Jan 21 18:44:00 crc kubenswrapper[5099]: I0121 18:44:00.173188 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 18:44:00 crc kubenswrapper[5099]: I0121 18:44:00.173591 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 18:44:00 crc kubenswrapper[5099]: I0121 18:44:00.174005 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483684-xbtgm"] Jan 21 18:44:00 crc kubenswrapper[5099]: I0121 18:44:00.188433 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq_a82cf411-eed8-4850-9fbf-a0c128c16d13/sg-core/0.log" Jan 21 18:44:00 crc kubenswrapper[5099]: I0121 18:44:00.280788 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv599\" (UniqueName: \"kubernetes.io/projected/0c5a191e-9a0f-46b8-ac2e-c1804f5ee270-kube-api-access-vv599\") pod \"auto-csr-approver-29483684-xbtgm\" (UID: \"0c5a191e-9a0f-46b8-ac2e-c1804f5ee270\") " pod="openshift-infra/auto-csr-approver-29483684-xbtgm" Jan 21 18:44:00 crc kubenswrapper[5099]: I0121 18:44:00.382806 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vv599\" (UniqueName: \"kubernetes.io/projected/0c5a191e-9a0f-46b8-ac2e-c1804f5ee270-kube-api-access-vv599\") pod \"auto-csr-approver-29483684-xbtgm\" (UID: \"0c5a191e-9a0f-46b8-ac2e-c1804f5ee270\") " pod="openshift-infra/auto-csr-approver-29483684-xbtgm" Jan 21 18:44:00 crc kubenswrapper[5099]: I0121 18:44:00.407454 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vv599\" (UniqueName: \"kubernetes.io/projected/0c5a191e-9a0f-46b8-ac2e-c1804f5ee270-kube-api-access-vv599\") pod \"auto-csr-approver-29483684-xbtgm\" (UID: \"0c5a191e-9a0f-46b8-ac2e-c1804f5ee270\") " pod="openshift-infra/auto-csr-approver-29483684-xbtgm" Jan 21 18:44:00 crc kubenswrapper[5099]: I0121 18:44:00.462003 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-646c885c84-r2p27_ed874079-58bf-48a1-8d42-af4769580a43/bridge/2.log" Jan 21 18:44:00 crc kubenswrapper[5099]: I0121 18:44:00.491598 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483684-xbtgm" Jan 21 18:44:00 crc kubenswrapper[5099]: I0121 18:44:00.734523 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483684-xbtgm"] Jan 21 18:44:00 crc kubenswrapper[5099]: I0121 18:44:00.751329 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-646c885c84-r2p27_ed874079-58bf-48a1-8d42-af4769580a43/sg-core/0.log" Jan 21 18:44:00 crc kubenswrapper[5099]: I0121 18:44:00.989587 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483684-xbtgm" event={"ID":"0c5a191e-9a0f-46b8-ac2e-c1804f5ee270","Type":"ContainerStarted","Data":"93871b01386f5b4db73feddc4681220bc2b0e7f28196455acc06edb84556a420"} Jan 21 18:44:01 crc kubenswrapper[5099]: I0121 18:44:01.014931 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv_201688ad-f074-4dc2-9033-36f09f9e4a9d/bridge/2.log" Jan 21 18:44:01 crc kubenswrapper[5099]: I0121 18:44:01.304007 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv_201688ad-f074-4dc2-9033-36f09f9e4a9d/sg-core/0.log" Jan 21 18:44:02 crc kubenswrapper[5099]: I0121 18:44:02.000955 5099 generic.go:358] "Generic (PLEG): container finished" podID="b68768e7-7e4f-4a67-b664-f7371a7d7037" containerID="1f0aa0b5ba8466a8aace91a77d7f5b4a32f2aa1a7dc6066326d798313df41430" exitCode=0 Jan 21 18:44:02 crc kubenswrapper[5099]: I0121 18:44:02.001461 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lsqp2" event={"ID":"b68768e7-7e4f-4a67-b664-f7371a7d7037","Type":"ContainerDied","Data":"1f0aa0b5ba8466a8aace91a77d7f5b4a32f2aa1a7dc6066326d798313df41430"} Jan 21 18:44:03 crc kubenswrapper[5099]: I0121 18:44:03.030991 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lsqp2" event={"ID":"b68768e7-7e4f-4a67-b664-f7371a7d7037","Type":"ContainerStarted","Data":"0633d7108214fed512e95506cbe5544f4c337826e6282d3343853bcf19456e58"} Jan 21 18:44:03 crc kubenswrapper[5099]: I0121 18:44:03.034630 5099 generic.go:358] "Generic (PLEG): container finished" podID="0c5a191e-9a0f-46b8-ac2e-c1804f5ee270" containerID="3a5b1434bcf08e3b9c5a22f7d965d44e5f741a500f3d326eab1b666fb4679d7e" exitCode=0 Jan 21 18:44:03 crc kubenswrapper[5099]: I0121 18:44:03.035190 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483684-xbtgm" event={"ID":"0c5a191e-9a0f-46b8-ac2e-c1804f5ee270","Type":"ContainerDied","Data":"3a5b1434bcf08e3b9c5a22f7d965d44e5f741a500f3d326eab1b666fb4679d7e"} Jan 21 18:44:03 crc kubenswrapper[5099]: I0121 18:44:03.054114 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lsqp2" podStartSLOduration=5.083671141 podStartE2EDuration="6.054078144s" podCreationTimestamp="2026-01-21 18:43:57 +0000 UTC" firstStartedPulling="2026-01-21 18:43:59.98022374 +0000 UTC m=+1797.394186201" lastFinishedPulling="2026-01-21 18:44:00.950630713 +0000 UTC m=+1798.364593204" observedRunningTime="2026-01-21 18:44:03.048510968 +0000 UTC m=+1800.462473429" watchObservedRunningTime="2026-01-21 18:44:03.054078144 +0000 UTC m=+1800.468040605" Jan 21 18:44:04 crc kubenswrapper[5099]: I0121 18:44:04.347126 5099 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483684-xbtgm" Jan 21 18:44:04 crc kubenswrapper[5099]: I0121 18:44:04.449411 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vv599\" (UniqueName: \"kubernetes.io/projected/0c5a191e-9a0f-46b8-ac2e-c1804f5ee270-kube-api-access-vv599\") pod \"0c5a191e-9a0f-46b8-ac2e-c1804f5ee270\" (UID: \"0c5a191e-9a0f-46b8-ac2e-c1804f5ee270\") " Jan 21 18:44:04 crc kubenswrapper[5099]: I0121 18:44:04.460340 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c5a191e-9a0f-46b8-ac2e-c1804f5ee270-kube-api-access-vv599" (OuterVolumeSpecName: "kube-api-access-vv599") pod "0c5a191e-9a0f-46b8-ac2e-c1804f5ee270" (UID: "0c5a191e-9a0f-46b8-ac2e-c1804f5ee270"). InnerVolumeSpecName "kube-api-access-vv599". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:44:04 crc kubenswrapper[5099]: I0121 18:44:04.552020 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vv599\" (UniqueName: \"kubernetes.io/projected/0c5a191e-9a0f-46b8-ac2e-c1804f5ee270-kube-api-access-vv599\") on node \"crc\" DevicePath \"\"" Jan 21 18:44:04 crc kubenswrapper[5099]: I0121 18:44:04.684511 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6pvpm_d9b34413-4767-4d59-b13b-8f882453977a/kube-multus/0.log" Jan 21 18:44:04 crc kubenswrapper[5099]: I0121 18:44:04.684579 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6pvpm_d9b34413-4767-4d59-b13b-8f882453977a/kube-multus/0.log" Jan 21 18:44:04 crc kubenswrapper[5099]: I0121 18:44:04.694703 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 18:44:04 crc kubenswrapper[5099]: I0121 18:44:04.694994 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 18:44:05 crc kubenswrapper[5099]: I0121 18:44:05.026460 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-86d44c8fc9-hzgpv_bfae1586-4cb9-4058-a0e1-151a2e3b5ad7/operator/0.log" Jan 21 18:44:05 crc kubenswrapper[5099]: I0121 18:44:05.056049 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483684-xbtgm" event={"ID":"0c5a191e-9a0f-46b8-ac2e-c1804f5ee270","Type":"ContainerDied","Data":"93871b01386f5b4db73feddc4681220bc2b0e7f28196455acc06edb84556a420"} Jan 21 18:44:05 crc kubenswrapper[5099]: I0121 18:44:05.056087 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483684-xbtgm" Jan 21 18:44:05 crc kubenswrapper[5099]: I0121 18:44:05.056167 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93871b01386f5b4db73feddc4681220bc2b0e7f28196455acc06edb84556a420" Jan 21 18:44:05 crc kubenswrapper[5099]: I0121 18:44:05.371877 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_60afeeee-13e5-4557-8409-391a5ae528c8/prometheus/0.log" Jan 21 18:44:05 crc kubenswrapper[5099]: I0121 18:44:05.422993 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483678-2b5zf"] Jan 21 18:44:05 crc kubenswrapper[5099]: I0121 18:44:05.430272 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483678-2b5zf"] Jan 21 18:44:05 crc kubenswrapper[5099]: I0121 18:44:05.692943 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_67bd99a7-8bd7-4673-a648-c41eee407194/elasticsearch/0.log" Jan 21 18:44:05 crc kubenswrapper[5099]: I0121 18:44:05.923484 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e537d8d-c124-46c8-a883-5a57e785095f" path="/var/lib/kubelet/pods/9e537d8d-c124-46c8-a883-5a57e785095f/volumes" Jan 21 18:44:06 crc kubenswrapper[5099]: I0121 18:44:06.009367 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-694dc457d5-rk86p_d63f0418-bf6c-4a0a-8b72-8fa1215358c0/prometheus-webhook-snmp/0.log" Jan 21 18:44:06 crc kubenswrapper[5099]: I0121 18:44:06.312034 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe/alertmanager/0.log" Jan 21 18:44:07 crc kubenswrapper[5099]: I0121 18:44:07.914036 5099 scope.go:117] "RemoveContainer" containerID="2597b26be062a6fac05730bc303e55baf9c4e17d7a57f2962c96e40cd24ca0da" Jan 21 18:44:07 crc kubenswrapper[5099]: E0121 18:44:07.915772 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 18:44:08 crc kubenswrapper[5099]: I0121 18:44:08.725041 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-lsqp2" Jan 21 18:44:08 crc kubenswrapper[5099]: I0121 18:44:08.725811 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lsqp2" Jan 21 18:44:08 crc kubenswrapper[5099]: I0121 18:44:08.778039 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lsqp2" Jan 21 18:44:09 crc kubenswrapper[5099]: I0121 18:44:09.581772 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lsqp2" Jan 21 18:44:11 crc kubenswrapper[5099]: I0121 18:44:11.951360 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lsqp2"] Jan 21 18:44:12 crc kubenswrapper[5099]: I0121 18:44:12.536821 5099 kuberuntime_container.go:858] "Killing container with a grace 
period" pod="openshift-marketplace/redhat-operators-lsqp2" podUID="b68768e7-7e4f-4a67-b664-f7371a7d7037" containerName="registry-server" containerID="cri-o://0633d7108214fed512e95506cbe5544f4c337826e6282d3343853bcf19456e58" gracePeriod=2 Jan 21 18:44:14 crc kubenswrapper[5099]: I0121 18:44:14.561013 5099 generic.go:358] "Generic (PLEG): container finished" podID="b68768e7-7e4f-4a67-b664-f7371a7d7037" containerID="0633d7108214fed512e95506cbe5544f4c337826e6282d3343853bcf19456e58" exitCode=0 Jan 21 18:44:14 crc kubenswrapper[5099]: I0121 18:44:14.561118 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lsqp2" event={"ID":"b68768e7-7e4f-4a67-b664-f7371a7d7037","Type":"ContainerDied","Data":"0633d7108214fed512e95506cbe5544f4c337826e6282d3343853bcf19456e58"} Jan 21 18:44:14 crc kubenswrapper[5099]: I0121 18:44:14.761639 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lsqp2" Jan 21 18:44:14 crc kubenswrapper[5099]: I0121 18:44:14.830062 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nd8tp\" (UniqueName: \"kubernetes.io/projected/b68768e7-7e4f-4a67-b664-f7371a7d7037-kube-api-access-nd8tp\") pod \"b68768e7-7e4f-4a67-b664-f7371a7d7037\" (UID: \"b68768e7-7e4f-4a67-b664-f7371a7d7037\") " Jan 21 18:44:14 crc kubenswrapper[5099]: I0121 18:44:14.830470 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b68768e7-7e4f-4a67-b664-f7371a7d7037-catalog-content\") pod \"b68768e7-7e4f-4a67-b664-f7371a7d7037\" (UID: \"b68768e7-7e4f-4a67-b664-f7371a7d7037\") " Jan 21 18:44:14 crc kubenswrapper[5099]: I0121 18:44:14.830672 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b68768e7-7e4f-4a67-b664-f7371a7d7037-utilities\") pod \"b68768e7-7e4f-4a67-b664-f7371a7d7037\" (UID: \"b68768e7-7e4f-4a67-b664-f7371a7d7037\") " Jan 21 18:44:14 crc kubenswrapper[5099]: I0121 18:44:14.832081 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b68768e7-7e4f-4a67-b664-f7371a7d7037-utilities" (OuterVolumeSpecName: "utilities") pod "b68768e7-7e4f-4a67-b664-f7371a7d7037" (UID: "b68768e7-7e4f-4a67-b664-f7371a7d7037"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:44:14 crc kubenswrapper[5099]: I0121 18:44:14.840389 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b68768e7-7e4f-4a67-b664-f7371a7d7037-kube-api-access-nd8tp" (OuterVolumeSpecName: "kube-api-access-nd8tp") pod "b68768e7-7e4f-4a67-b664-f7371a7d7037" (UID: "b68768e7-7e4f-4a67-b664-f7371a7d7037"). InnerVolumeSpecName "kube-api-access-nd8tp". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:44:14 crc kubenswrapper[5099]: I0121 18:44:14.933331 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nd8tp\" (UniqueName: \"kubernetes.io/projected/b68768e7-7e4f-4a67-b664-f7371a7d7037-kube-api-access-nd8tp\") on node \"crc\" DevicePath \"\"" Jan 21 18:44:14 crc kubenswrapper[5099]: I0121 18:44:14.933400 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b68768e7-7e4f-4a67-b664-f7371a7d7037-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 18:44:14 crc kubenswrapper[5099]: I0121 18:44:14.948108 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b68768e7-7e4f-4a67-b664-f7371a7d7037-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b68768e7-7e4f-4a67-b664-f7371a7d7037" (UID: "b68768e7-7e4f-4a67-b664-f7371a7d7037"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:44:15 crc kubenswrapper[5099]: I0121 18:44:15.034826 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b68768e7-7e4f-4a67-b664-f7371a7d7037-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 18:44:15 crc kubenswrapper[5099]: I0121 18:44:15.572301 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lsqp2" Jan 21 18:44:15 crc kubenswrapper[5099]: I0121 18:44:15.572292 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lsqp2" event={"ID":"b68768e7-7e4f-4a67-b664-f7371a7d7037","Type":"ContainerDied","Data":"1fb0274d2ac7f99060638468b0b2a51b0f3cb552ce28f49e9958ff7afe8f5afb"} Jan 21 18:44:15 crc kubenswrapper[5099]: I0121 18:44:15.573084 5099 scope.go:117] "RemoveContainer" containerID="0633d7108214fed512e95506cbe5544f4c337826e6282d3343853bcf19456e58" Jan 21 18:44:15 crc kubenswrapper[5099]: I0121 18:44:15.610956 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lsqp2"] Jan 21 18:44:15 crc kubenswrapper[5099]: I0121 18:44:15.617478 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lsqp2"] Jan 21 18:44:15 crc kubenswrapper[5099]: I0121 18:44:15.619644 5099 scope.go:117] "RemoveContainer" containerID="1f0aa0b5ba8466a8aace91a77d7f5b4a32f2aa1a7dc6066326d798313df41430" Jan 21 18:44:15 crc kubenswrapper[5099]: I0121 18:44:15.641810 5099 scope.go:117] "RemoveContainer" containerID="a499a1ebe73a282807979074bd4d17389c2fe6b2bc41b53803c5cc57e8af6f61" Jan 21 18:44:15 crc kubenswrapper[5099]: I0121 18:44:15.922865 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b68768e7-7e4f-4a67-b664-f7371a7d7037" path="/var/lib/kubelet/pods/b68768e7-7e4f-4a67-b664-f7371a7d7037/volumes" Jan 21 18:44:18 crc kubenswrapper[5099]: I0121 18:44:18.066072 5099 scope.go:117] "RemoveContainer" containerID="e5bec5d558bdce8166ad50eb7e3d4a32b802dd29d871e6d34a04b06540f52c88" Jan 21 18:44:22 crc kubenswrapper[5099]: I0121 18:44:22.914309 5099 scope.go:117] "RemoveContainer" containerID="2597b26be062a6fac05730bc303e55baf9c4e17d7a57f2962c96e40cd24ca0da" Jan 21 18:44:22 crc kubenswrapper[5099]: E0121 18:44:22.915268 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 18:44:23 crc kubenswrapper[5099]: I0121 18:44:23.276982 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-7d4d5cb5f7-p4dpk_00721a47-1d2e-4b1f-8379-74e69855906d/operator/0.log" Jan 21 18:44:27 crc kubenswrapper[5099]: I0121 18:44:27.482280 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-86d44c8fc9-hzgpv_bfae1586-4cb9-4058-a0e1-151a2e3b5ad7/operator/0.log" Jan 21 18:44:27 crc kubenswrapper[5099]: I0121 18:44:27.779408 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_qdr-test_3b99bf7c-18ed-4371-91d1-e75f1f80ca19/qdr/0.log" Jan 21 18:44:35 crc kubenswrapper[5099]: I0121 18:44:35.913790 5099 scope.go:117] "RemoveContainer" containerID="2597b26be062a6fac05730bc303e55baf9c4e17d7a57f2962c96e40cd24ca0da" Jan 21 18:44:35 crc kubenswrapper[5099]: E0121 18:44:35.916077 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 18:44:46 crc kubenswrapper[5099]: I0121 18:44:46.913835 5099 scope.go:117] "RemoveContainer" containerID="2597b26be062a6fac05730bc303e55baf9c4e17d7a57f2962c96e40cd24ca0da" Jan 21 18:44:46 crc kubenswrapper[5099]: E0121 18:44:46.914975 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 18:44:58 crc kubenswrapper[5099]: I0121 18:44:58.913890 5099 scope.go:117] "RemoveContainer" containerID="2597b26be062a6fac05730bc303e55baf9c4e17d7a57f2962c96e40cd24ca0da" Jan 21 18:44:58 crc kubenswrapper[5099]: E0121 18:44:58.915267 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 18:45:00 crc kubenswrapper[5099]: I0121 18:45:00.150920 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483685-hp84l"] Jan 21 18:45:00 crc kubenswrapper[5099]: I0121 18:45:00.152654 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b68768e7-7e4f-4a67-b664-f7371a7d7037" containerName="extract-utilities" Jan 21 18:45:00 crc kubenswrapper[5099]: I0121 18:45:00.152716 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="b68768e7-7e4f-4a67-b664-f7371a7d7037" 
containerName="extract-utilities" Jan 21 18:45:00 crc kubenswrapper[5099]: I0121 18:45:00.152791 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0c5a191e-9a0f-46b8-ac2e-c1804f5ee270" containerName="oc" Jan 21 18:45:00 crc kubenswrapper[5099]: I0121 18:45:00.152802 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c5a191e-9a0f-46b8-ac2e-c1804f5ee270" containerName="oc" Jan 21 18:45:00 crc kubenswrapper[5099]: I0121 18:45:00.152825 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b68768e7-7e4f-4a67-b664-f7371a7d7037" containerName="registry-server" Jan 21 18:45:00 crc kubenswrapper[5099]: I0121 18:45:00.152860 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="b68768e7-7e4f-4a67-b664-f7371a7d7037" containerName="registry-server" Jan 21 18:45:00 crc kubenswrapper[5099]: I0121 18:45:00.152891 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b68768e7-7e4f-4a67-b664-f7371a7d7037" containerName="extract-content" Jan 21 18:45:00 crc kubenswrapper[5099]: I0121 18:45:00.152899 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="b68768e7-7e4f-4a67-b664-f7371a7d7037" containerName="extract-content" Jan 21 18:45:00 crc kubenswrapper[5099]: I0121 18:45:00.153188 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="b68768e7-7e4f-4a67-b664-f7371a7d7037" containerName="registry-server" Jan 21 18:45:00 crc kubenswrapper[5099]: I0121 18:45:00.153214 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="0c5a191e-9a0f-46b8-ac2e-c1804f5ee270" containerName="oc" Jan 21 18:45:00 crc kubenswrapper[5099]: I0121 18:45:00.158586 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483685-hp84l" Jan 21 18:45:00 crc kubenswrapper[5099]: I0121 18:45:00.163304 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 21 18:45:00 crc kubenswrapper[5099]: I0121 18:45:00.163757 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 21 18:45:00 crc kubenswrapper[5099]: I0121 18:45:00.164452 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483685-hp84l"] Jan 21 18:45:00 crc kubenswrapper[5099]: I0121 18:45:00.244476 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b391ed19-3c37-4895-8b2d-d097e67c01ba-secret-volume\") pod \"collect-profiles-29483685-hp84l\" (UID: \"b391ed19-3c37-4895-8b2d-d097e67c01ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483685-hp84l" Jan 21 18:45:00 crc kubenswrapper[5099]: I0121 18:45:00.244553 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbqrs\" (UniqueName: \"kubernetes.io/projected/b391ed19-3c37-4895-8b2d-d097e67c01ba-kube-api-access-vbqrs\") pod \"collect-profiles-29483685-hp84l\" (UID: \"b391ed19-3c37-4895-8b2d-d097e67c01ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483685-hp84l" Jan 21 18:45:00 crc kubenswrapper[5099]: I0121 18:45:00.244643 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/b391ed19-3c37-4895-8b2d-d097e67c01ba-config-volume\") pod \"collect-profiles-29483685-hp84l\" (UID: \"b391ed19-3c37-4895-8b2d-d097e67c01ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483685-hp84l" Jan 21 18:45:00 crc kubenswrapper[5099]: I0121 18:45:00.346855 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b391ed19-3c37-4895-8b2d-d097e67c01ba-config-volume\") pod \"collect-profiles-29483685-hp84l\" (UID: \"b391ed19-3c37-4895-8b2d-d097e67c01ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483685-hp84l" Jan 21 18:45:00 crc kubenswrapper[5099]: I0121 18:45:00.347012 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b391ed19-3c37-4895-8b2d-d097e67c01ba-secret-volume\") pod \"collect-profiles-29483685-hp84l\" (UID: \"b391ed19-3c37-4895-8b2d-d097e67c01ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483685-hp84l" Jan 21 18:45:00 crc kubenswrapper[5099]: I0121 18:45:00.347051 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vbqrs\" (UniqueName: \"kubernetes.io/projected/b391ed19-3c37-4895-8b2d-d097e67c01ba-kube-api-access-vbqrs\") pod \"collect-profiles-29483685-hp84l\" (UID: \"b391ed19-3c37-4895-8b2d-d097e67c01ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483685-hp84l" Jan 21 18:45:00 crc kubenswrapper[5099]: I0121 18:45:00.348514 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b391ed19-3c37-4895-8b2d-d097e67c01ba-config-volume\") pod \"collect-profiles-29483685-hp84l\" (UID: \"b391ed19-3c37-4895-8b2d-d097e67c01ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483685-hp84l" Jan 21 18:45:00 crc kubenswrapper[5099]: I0121 18:45:00.355979 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b391ed19-3c37-4895-8b2d-d097e67c01ba-secret-volume\") pod \"collect-profiles-29483685-hp84l\" (UID: \"b391ed19-3c37-4895-8b2d-d097e67c01ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483685-hp84l" Jan 21 18:45:00 crc kubenswrapper[5099]: I0121 18:45:00.369474 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbqrs\" (UniqueName: \"kubernetes.io/projected/b391ed19-3c37-4895-8b2d-d097e67c01ba-kube-api-access-vbqrs\") pod \"collect-profiles-29483685-hp84l\" (UID: \"b391ed19-3c37-4895-8b2d-d097e67c01ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483685-hp84l" Jan 21 18:45:00 crc kubenswrapper[5099]: I0121 18:45:00.517758 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483685-hp84l" Jan 21 18:45:00 crc kubenswrapper[5099]: I0121 18:45:00.964358 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483685-hp84l"] Jan 21 18:45:00 crc kubenswrapper[5099]: I0121 18:45:00.998559 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483685-hp84l" event={"ID":"b391ed19-3c37-4895-8b2d-d097e67c01ba","Type":"ContainerStarted","Data":"2762ea4c33a551202ec0aaa36f334eaf3bf7041acfcff142a821bdda219b25a8"} Jan 21 18:45:02 crc kubenswrapper[5099]: I0121 18:45:02.012414 5099 generic.go:358] "Generic (PLEG): container finished" podID="b391ed19-3c37-4895-8b2d-d097e67c01ba" containerID="fec06969ffdf64f158602295d70afb99a4d212c44e05c93be38ccbbe2d6b0239" exitCode=0 Jan 21 18:45:02 crc kubenswrapper[5099]: I0121 18:45:02.012508 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483685-hp84l" event={"ID":"b391ed19-3c37-4895-8b2d-d097e67c01ba","Type":"ContainerDied","Data":"fec06969ffdf64f158602295d70afb99a4d212c44e05c93be38ccbbe2d6b0239"} Jan 21 18:45:03 crc kubenswrapper[5099]: I0121 18:45:03.305833 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-l4bc9/must-gather-wxf48"] Jan 21 18:45:03 crc kubenswrapper[5099]: I0121 18:45:03.313547 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l4bc9/must-gather-wxf48" Jan 21 18:45:03 crc kubenswrapper[5099]: I0121 18:45:03.318059 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-l4bc9\"/\"openshift-service-ca.crt\"" Jan 21 18:45:03 crc kubenswrapper[5099]: I0121 18:45:03.318094 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-l4bc9\"/\"kube-root-ca.crt\"" Jan 21 18:45:03 crc kubenswrapper[5099]: I0121 18:45:03.318174 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-l4bc9\"/\"default-dockercfg-bwxpt\"" Jan 21 18:45:03 crc kubenswrapper[5099]: I0121 18:45:03.323066 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483685-hp84l" Jan 21 18:45:03 crc kubenswrapper[5099]: I0121 18:45:03.332130 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-l4bc9/must-gather-wxf48"] Jan 21 18:45:03 crc kubenswrapper[5099]: I0121 18:45:03.405896 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbqrs\" (UniqueName: \"kubernetes.io/projected/b391ed19-3c37-4895-8b2d-d097e67c01ba-kube-api-access-vbqrs\") pod \"b391ed19-3c37-4895-8b2d-d097e67c01ba\" (UID: \"b391ed19-3c37-4895-8b2d-d097e67c01ba\") " Jan 21 18:45:03 crc kubenswrapper[5099]: I0121 18:45:03.406084 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b391ed19-3c37-4895-8b2d-d097e67c01ba-config-volume\") pod \"b391ed19-3c37-4895-8b2d-d097e67c01ba\" (UID: \"b391ed19-3c37-4895-8b2d-d097e67c01ba\") " Jan 21 18:45:03 crc kubenswrapper[5099]: I0121 18:45:03.406150 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b391ed19-3c37-4895-8b2d-d097e67c01ba-secret-volume\") pod \"b391ed19-3c37-4895-8b2d-d097e67c01ba\" (UID: \"b391ed19-3c37-4895-8b2d-d097e67c01ba\") " Jan 21 18:45:03 crc kubenswrapper[5099]: I0121 18:45:03.406478 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/21bba606-e7ad-4bca-8112-191b8344b686-must-gather-output\") pod \"must-gather-wxf48\" (UID: \"21bba606-e7ad-4bca-8112-191b8344b686\") " pod="openshift-must-gather-l4bc9/must-gather-wxf48" Jan 21 18:45:03 crc kubenswrapper[5099]: I0121 18:45:03.406534 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs6cb\" (UniqueName: \"kubernetes.io/projected/21bba606-e7ad-4bca-8112-191b8344b686-kube-api-access-gs6cb\") pod \"must-gather-wxf48\" (UID: \"21bba606-e7ad-4bca-8112-191b8344b686\") " pod="openshift-must-gather-l4bc9/must-gather-wxf48" Jan 21 18:45:03 crc kubenswrapper[5099]: I0121 18:45:03.407088 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b391ed19-3c37-4895-8b2d-d097e67c01ba-config-volume" (OuterVolumeSpecName: "config-volume") pod "b391ed19-3c37-4895-8b2d-d097e67c01ba" (UID: "b391ed19-3c37-4895-8b2d-d097e67c01ba"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 18:45:03 crc kubenswrapper[5099]: I0121 18:45:03.416516 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b391ed19-3c37-4895-8b2d-d097e67c01ba-kube-api-access-vbqrs" (OuterVolumeSpecName: "kube-api-access-vbqrs") pod "b391ed19-3c37-4895-8b2d-d097e67c01ba" (UID: "b391ed19-3c37-4895-8b2d-d097e67c01ba"). InnerVolumeSpecName "kube-api-access-vbqrs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:45:03 crc kubenswrapper[5099]: I0121 18:45:03.417380 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b391ed19-3c37-4895-8b2d-d097e67c01ba-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b391ed19-3c37-4895-8b2d-d097e67c01ba" (UID: "b391ed19-3c37-4895-8b2d-d097e67c01ba"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 18:45:03 crc kubenswrapper[5099]: I0121 18:45:03.508296 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/21bba606-e7ad-4bca-8112-191b8344b686-must-gather-output\") pod \"must-gather-wxf48\" (UID: \"21bba606-e7ad-4bca-8112-191b8344b686\") " pod="openshift-must-gather-l4bc9/must-gather-wxf48" Jan 21 18:45:03 crc kubenswrapper[5099]: I0121 18:45:03.509147 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gs6cb\" (UniqueName: \"kubernetes.io/projected/21bba606-e7ad-4bca-8112-191b8344b686-kube-api-access-gs6cb\") pod \"must-gather-wxf48\" (UID: \"21bba606-e7ad-4bca-8112-191b8344b686\") " pod="openshift-must-gather-l4bc9/must-gather-wxf48" Jan 21 18:45:03 crc kubenswrapper[5099]: I0121 18:45:03.509649 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vbqrs\" (UniqueName: \"kubernetes.io/projected/b391ed19-3c37-4895-8b2d-d097e67c01ba-kube-api-access-vbqrs\") on node \"crc\" DevicePath \"\"" Jan 21 18:45:03 crc kubenswrapper[5099]: I0121 18:45:03.509773 5099 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b391ed19-3c37-4895-8b2d-d097e67c01ba-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 18:45:03 crc kubenswrapper[5099]: I0121 18:45:03.509873 5099 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b391ed19-3c37-4895-8b2d-d097e67c01ba-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 18:45:03 crc kubenswrapper[5099]: I0121 18:45:03.509097 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/21bba606-e7ad-4bca-8112-191b8344b686-must-gather-output\") pod \"must-gather-wxf48\" (UID: \"21bba606-e7ad-4bca-8112-191b8344b686\") " pod="openshift-must-gather-l4bc9/must-gather-wxf48" Jan 21 18:45:03 crc kubenswrapper[5099]: I0121 18:45:03.542966 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gs6cb\" (UniqueName: \"kubernetes.io/projected/21bba606-e7ad-4bca-8112-191b8344b686-kube-api-access-gs6cb\") pod \"must-gather-wxf48\" (UID: \"21bba606-e7ad-4bca-8112-191b8344b686\") " pod="openshift-must-gather-l4bc9/must-gather-wxf48" Jan 21 18:45:03 crc kubenswrapper[5099]: I0121 18:45:03.645138 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-l4bc9\"/\"default-dockercfg-bwxpt\"" Jan 21 18:45:03 crc kubenswrapper[5099]: I0121 18:45:03.651376 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-l4bc9/must-gather-wxf48" Jan 21 18:45:03 crc kubenswrapper[5099]: I0121 18:45:03.949627 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-l4bc9/must-gather-wxf48"] Jan 21 18:45:03 crc kubenswrapper[5099]: W0121 18:45:03.955959 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21bba606_e7ad_4bca_8112_191b8344b686.slice/crio-2cdf68219e9e1690a8eb1f368fbbb1ac8dc5f674d41f31d39fde6eb9a3d345e8 WatchSource:0}: Error finding container 2cdf68219e9e1690a8eb1f368fbbb1ac8dc5f674d41f31d39fde6eb9a3d345e8: Status 404 returned error can't find the container with id 2cdf68219e9e1690a8eb1f368fbbb1ac8dc5f674d41f31d39fde6eb9a3d345e8 Jan 21 18:45:04 crc kubenswrapper[5099]: I0121 18:45:04.038762 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483685-hp84l" event={"ID":"b391ed19-3c37-4895-8b2d-d097e67c01ba","Type":"ContainerDied","Data":"2762ea4c33a551202ec0aaa36f334eaf3bf7041acfcff142a821bdda219b25a8"} Jan 21 18:45:04 crc kubenswrapper[5099]: I0121 18:45:04.038892 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2762ea4c33a551202ec0aaa36f334eaf3bf7041acfcff142a821bdda219b25a8" Jan 21 18:45:04 crc kubenswrapper[5099]: I0121 18:45:04.039013 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483685-hp84l" Jan 21 18:45:04 crc kubenswrapper[5099]: I0121 18:45:04.041789 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l4bc9/must-gather-wxf48" event={"ID":"21bba606-e7ad-4bca-8112-191b8344b686","Type":"ContainerStarted","Data":"2cdf68219e9e1690a8eb1f368fbbb1ac8dc5f674d41f31d39fde6eb9a3d345e8"} Jan 21 18:45:11 crc kubenswrapper[5099]: I0121 18:45:11.112523 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l4bc9/must-gather-wxf48" event={"ID":"21bba606-e7ad-4bca-8112-191b8344b686","Type":"ContainerStarted","Data":"e7bdba34d7909f3cc978e6eea37b09f80b5451805bb14a375ab55e02d2086cf2"} Jan 21 18:45:11 crc kubenswrapper[5099]: I0121 18:45:11.113407 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l4bc9/must-gather-wxf48" event={"ID":"21bba606-e7ad-4bca-8112-191b8344b686","Type":"ContainerStarted","Data":"3d2a7b86416fb3f4c1a7c082dfe17ac4cee57a7b00a1f69347bcb68c753a13bc"} Jan 21 18:45:11 crc kubenswrapper[5099]: I0121 18:45:11.134962 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-l4bc9/must-gather-wxf48" podStartSLOduration=2.115778785 podStartE2EDuration="8.13493582s" podCreationTimestamp="2026-01-21 18:45:03 +0000 UTC" firstStartedPulling="2026-01-21 18:45:03.959470147 +0000 UTC m=+1861.373432598" lastFinishedPulling="2026-01-21 18:45:09.978627172 +0000 UTC m=+1867.392589633" observedRunningTime="2026-01-21 18:45:11.130385738 +0000 UTC m=+1868.544348199" watchObservedRunningTime="2026-01-21 18:45:11.13493582 +0000 UTC m=+1868.548898281" Jan 21 18:45:11 crc kubenswrapper[5099]: I0121 18:45:11.915314 5099 scope.go:117] "RemoveContainer" containerID="2597b26be062a6fac05730bc303e55baf9c4e17d7a57f2962c96e40cd24ca0da" Jan 21 18:45:11 crc kubenswrapper[5099]: E0121 18:45:11.915754 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 18:45:24 crc kubenswrapper[5099]: I0121 18:45:24.765821 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-75ffdb6fcd-xhj5t_178950b5-b1b9-4d7d-90b1-ba4fb79fd10d/control-plane-machine-set-operator/0.log" Jan 21 18:45:24 crc kubenswrapper[5099]: I0121 18:45:24.786877 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-477z9_0e5a1f9f-a6df-4d87-bc2d-509d2632fb32/kube-rbac-proxy/0.log" Jan 21 18:45:24 crc kubenswrapper[5099]: I0121 18:45:24.800652 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-477z9_0e5a1f9f-a6df-4d87-bc2d-509d2632fb32/machine-api-operator/0.log" Jan 21 18:45:24 crc kubenswrapper[5099]: I0121 18:45:24.914035 5099 scope.go:117] "RemoveContainer" containerID="2597b26be062a6fac05730bc303e55baf9c4e17d7a57f2962c96e40cd24ca0da" Jan 21 18:45:24 crc kubenswrapper[5099]: E0121 18:45:24.914385 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 18:45:29 crc kubenswrapper[5099]: I0121 18:45:29.989614 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858d87f86b-dkcxp_6c19eb6f-a0f6-400c-9c2d-bb4a665d1bcb/cert-manager-controller/0.log" Jan 21 18:45:30 crc kubenswrapper[5099]: I0121 18:45:30.016035 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7dbf76d5c8-szdsk_4bb52920-da03-43a3-bde0-0504738f45ab/cert-manager-cainjector/0.log" Jan 21 18:45:30 crc kubenswrapper[5099]: I0121 18:45:30.031442 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-7894b5b9b4-pznvg_27547e6e-e7d9-4aed-9ce4-f2cf98352e1d/cert-manager-webhook/0.log" Jan 21 18:45:35 crc kubenswrapper[5099]: I0121 18:45:35.299244 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-7w26r_becb8e6d-88cd-4469-a912-f5e13a03e815/prometheus-operator/0.log" Jan 21 18:45:35 crc kubenswrapper[5099]: I0121 18:45:35.316827 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6bcfc495c6-8wjp5_640f434d-3e8f-4429-a9b7-89a58100e49c/prometheus-operator-admission-webhook/0.log" Jan 21 18:45:35 crc kubenswrapper[5099]: I0121 18:45:35.331380 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6bcfc495c6-vprfd_f5eafa3f-5eb2-445a-a0db-d33e4783861e/prometheus-operator-admission-webhook/0.log" Jan 21 18:45:35 crc kubenswrapper[5099]: I0121 18:45:35.354554 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-mjp6r_02638344-e66d-4d9e-bea9-cdf3c1040c33/operator/0.log" Jan 21 18:45:35 crc 
kubenswrapper[5099]: I0121 18:45:35.366559 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-9fvzk_aceb441c-bf15-4d82-908b-d5300c9a526e/perses-operator/0.log" Jan 21 18:45:38 crc kubenswrapper[5099]: I0121 18:45:38.914586 5099 scope.go:117] "RemoveContainer" containerID="2597b26be062a6fac05730bc303e55baf9c4e17d7a57f2962c96e40cd24ca0da" Jan 21 18:45:38 crc kubenswrapper[5099]: E0121 18:45:38.915480 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 18:45:40 crc kubenswrapper[5099]: I0121 18:45:40.881528 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ztdt_730e984f-9245-4a98-aefb-dda6686307f1/extract/0.log" Jan 21 18:45:40 crc kubenswrapper[5099]: I0121 18:45:40.894784 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ztdt_730e984f-9245-4a98-aefb-dda6686307f1/util/0.log" Jan 21 18:45:40 crc kubenswrapper[5099]: I0121 18:45:40.934526 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ztdt_730e984f-9245-4a98-aefb-dda6686307f1/pull/0.log" Jan 21 18:45:40 crc kubenswrapper[5099]: I0121 18:45:40.945842 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f48c9f_0ca4259c-807e-4b9c-bff3-026450dc0a42/extract/0.log" Jan 21 18:45:40 crc kubenswrapper[5099]: I0121 18:45:40.955820 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f48c9f_0ca4259c-807e-4b9c-bff3-026450dc0a42/util/0.log" Jan 21 18:45:40 crc kubenswrapper[5099]: I0121 18:45:40.967847 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f48c9f_0ca4259c-807e-4b9c-bff3-026450dc0a42/pull/0.log" Jan 21 18:45:40 crc kubenswrapper[5099]: I0121 18:45:40.986199 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5elm9r7_cf4ea907-1f59-413d-bd0e-95da9a482151/extract/0.log" Jan 21 18:45:40 crc kubenswrapper[5099]: I0121 18:45:40.995273 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5elm9r7_cf4ea907-1f59-413d-bd0e-95da9a482151/util/0.log" Jan 21 18:45:41 crc kubenswrapper[5099]: I0121 18:45:41.007120 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5elm9r7_cf4ea907-1f59-413d-bd0e-95da9a482151/pull/0.log" Jan 21 18:45:41 crc kubenswrapper[5099]: I0121 18:45:41.024802 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zqkl7_d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44/extract/0.log" Jan 21 18:45:41 crc 
kubenswrapper[5099]: I0121 18:45:41.039517 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zqkl7_d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44/util/0.log" Jan 21 18:45:41 crc kubenswrapper[5099]: I0121 18:45:41.054127 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zqkl7_d94a1ae5-e4bc-4ba3-98ee-4e0ab211db44/pull/0.log" Jan 21 18:45:41 crc kubenswrapper[5099]: I0121 18:45:41.203559 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-f58tc_4c1f0429-8f30-4646-aa1b-9913eb49ebfe/registry-server/0.log" Jan 21 18:45:41 crc kubenswrapper[5099]: I0121 18:45:41.210463 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-f58tc_4c1f0429-8f30-4646-aa1b-9913eb49ebfe/extract-utilities/0.log" Jan 21 18:45:41 crc kubenswrapper[5099]: I0121 18:45:41.222610 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-f58tc_4c1f0429-8f30-4646-aa1b-9913eb49ebfe/extract-content/0.log" Jan 21 18:45:41 crc kubenswrapper[5099]: I0121 18:45:41.667716 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9hbr9_fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5/registry-server/0.log" Jan 21 18:45:41 crc kubenswrapper[5099]: I0121 18:45:41.674027 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9hbr9_fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5/extract-utilities/0.log" Jan 21 18:45:41 crc kubenswrapper[5099]: I0121 18:45:41.682140 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9hbr9_fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5/extract-content/0.log" Jan 21 18:45:41 crc kubenswrapper[5099]: I0121 18:45:41.700382 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-x9tj4_1df72cc8-24fd-4b08-b17a-c5509ed05634/marketplace-operator/0.log" Jan 21 18:45:42 crc kubenswrapper[5099]: I0121 18:45:42.112613 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xvjkq_4cd50145-5d14-4eb5-8b45-d5c10f38600a/registry-server/0.log" Jan 21 18:45:42 crc kubenswrapper[5099]: I0121 18:45:42.127002 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xvjkq_4cd50145-5d14-4eb5-8b45-d5c10f38600a/extract-utilities/0.log" Jan 21 18:45:42 crc kubenswrapper[5099]: I0121 18:45:42.139844 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xvjkq_4cd50145-5d14-4eb5-8b45-d5c10f38600a/extract-content/0.log" Jan 21 18:45:46 crc kubenswrapper[5099]: I0121 18:45:46.409131 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-7w26r_becb8e6d-88cd-4469-a912-f5e13a03e815/prometheus-operator/0.log" Jan 21 18:45:46 crc kubenswrapper[5099]: I0121 18:45:46.420785 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6bcfc495c6-8wjp5_640f434d-3e8f-4429-a9b7-89a58100e49c/prometheus-operator-admission-webhook/0.log" Jan 21 18:45:48 crc kubenswrapper[5099]: I0121 18:45:48.582277 5099 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6bcfc495c6-vprfd_f5eafa3f-5eb2-445a-a0db-d33e4783861e/prometheus-operator-admission-webhook/0.log" Jan 21 18:45:48 crc kubenswrapper[5099]: I0121 18:45:48.610302 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-mjp6r_02638344-e66d-4d9e-bea9-cdf3c1040c33/operator/0.log" Jan 21 18:45:48 crc kubenswrapper[5099]: I0121 18:45:48.631304 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-9fvzk_aceb441c-bf15-4d82-908b-d5300c9a526e/perses-operator/0.log" Jan 21 18:45:53 crc kubenswrapper[5099]: I0121 18:45:53.930109 5099 scope.go:117] "RemoveContainer" containerID="2597b26be062a6fac05730bc303e55baf9c4e17d7a57f2962c96e40cd24ca0da" Jan 21 18:45:53 crc kubenswrapper[5099]: E0121 18:45:53.931510 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 18:45:55 crc kubenswrapper[5099]: I0121 18:45:55.673409 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-7w26r_becb8e6d-88cd-4469-a912-f5e13a03e815/prometheus-operator/0.log" Jan 21 18:45:55 crc kubenswrapper[5099]: I0121 18:45:55.688593 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6bcfc495c6-8wjp5_640f434d-3e8f-4429-a9b7-89a58100e49c/prometheus-operator-admission-webhook/0.log" Jan 21 18:45:55 crc kubenswrapper[5099]: I0121 18:45:55.710538 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6bcfc495c6-vprfd_f5eafa3f-5eb2-445a-a0db-d33e4783861e/prometheus-operator-admission-webhook/0.log" Jan 21 18:45:55 crc kubenswrapper[5099]: I0121 18:45:55.730465 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-mjp6r_02638344-e66d-4d9e-bea9-cdf3c1040c33/operator/0.log" Jan 21 18:45:55 crc kubenswrapper[5099]: I0121 18:45:55.750571 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-9fvzk_aceb441c-bf15-4d82-908b-d5300c9a526e/perses-operator/0.log" Jan 21 18:45:55 crc kubenswrapper[5099]: I0121 18:45:55.839421 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858d87f86b-dkcxp_6c19eb6f-a0f6-400c-9c2d-bb4a665d1bcb/cert-manager-controller/0.log" Jan 21 18:45:55 crc kubenswrapper[5099]: I0121 18:45:55.853137 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7dbf76d5c8-szdsk_4bb52920-da03-43a3-bde0-0504738f45ab/cert-manager-cainjector/0.log" Jan 21 18:45:55 crc kubenswrapper[5099]: I0121 18:45:55.877242 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-7894b5b9b4-pznvg_27547e6e-e7d9-4aed-9ce4-f2cf98352e1d/cert-manager-webhook/0.log" Jan 21 18:45:56 crc kubenswrapper[5099]: I0121 18:45:56.416367 5099 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-858d87f86b-dkcxp_6c19eb6f-a0f6-400c-9c2d-bb4a665d1bcb/cert-manager-controller/0.log" Jan 21 18:45:56 crc kubenswrapper[5099]: I0121 18:45:56.431770 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7dbf76d5c8-szdsk_4bb52920-da03-43a3-bde0-0504738f45ab/cert-manager-cainjector/0.log" Jan 21 18:45:56 crc kubenswrapper[5099]: I0121 18:45:56.443392 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-7894b5b9b4-pznvg_27547e6e-e7d9-4aed-9ce4-f2cf98352e1d/cert-manager-webhook/0.log" Jan 21 18:45:56 crc kubenswrapper[5099]: I0121 18:45:56.965608 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-75ffdb6fcd-xhj5t_178950b5-b1b9-4d7d-90b1-ba4fb79fd10d/control-plane-machine-set-operator/0.log" Jan 21 18:45:56 crc kubenswrapper[5099]: I0121 18:45:56.979313 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-477z9_0e5a1f9f-a6df-4d87-bc2d-509d2632fb32/kube-rbac-proxy/0.log" Jan 21 18:45:56 crc kubenswrapper[5099]: I0121 18:45:56.988058 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-477z9_0e5a1f9f-a6df-4d87-bc2d-509d2632fb32/machine-api-operator/0.log" Jan 21 18:45:57 crc kubenswrapper[5099]: I0121 18:45:57.536582 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe/alertmanager/0.log" Jan 21 18:45:57 crc kubenswrapper[5099]: I0121 18:45:57.546105 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe/config-reloader/0.log" Jan 21 18:45:57 crc kubenswrapper[5099]: I0121 18:45:57.557063 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe/oauth-proxy/0.log" Jan 21 18:45:57 crc kubenswrapper[5099]: I0121 18:45:57.567784 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_463f5beb-d2e5-4316-9cce-a1f8ab3ca4fe/init-config-reloader/0.log" Jan 21 18:45:57 crc kubenswrapper[5099]: I0121 18:45:57.584379 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_curl_2b184571-3b36-4cc9-9494-e673a05d23a2/curl/0.log" Jan 21 18:45:57 crc kubenswrapper[5099]: I0121 18:45:57.593473 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-646c885c84-r2p27_ed874079-58bf-48a1-8d42-af4769580a43/bridge/2.log" Jan 21 18:45:57 crc kubenswrapper[5099]: I0121 18:45:57.593698 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-646c885c84-r2p27_ed874079-58bf-48a1-8d42-af4769580a43/bridge/1.log" Jan 21 18:45:57 crc kubenswrapper[5099]: I0121 18:45:57.600612 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-646c885c84-r2p27_ed874079-58bf-48a1-8d42-af4769580a43/sg-core/0.log" Jan 21 18:45:57 crc kubenswrapper[5099]: I0121 18:45:57.614559 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq_a82cf411-eed8-4850-9fbf-a0c128c16d13/oauth-proxy/0.log" Jan 21 18:45:57 crc 
kubenswrapper[5099]: I0121 18:45:57.624074 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq_a82cf411-eed8-4850-9fbf-a0c128c16d13/bridge/1.log" Jan 21 18:45:57 crc kubenswrapper[5099]: I0121 18:45:57.624223 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq_a82cf411-eed8-4850-9fbf-a0c128c16d13/bridge/2.log" Jan 21 18:45:57 crc kubenswrapper[5099]: I0121 18:45:57.631152 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zdnmq_a82cf411-eed8-4850-9fbf-a0c128c16d13/sg-core/0.log" Jan 21 18:45:57 crc kubenswrapper[5099]: I0121 18:45:57.642867 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-7dc4588586-kd9rf_5b9fc1f3-5435-4492-a8fb-4ab02d6de3f6/bridge/2.log" Jan 21 18:45:57 crc kubenswrapper[5099]: I0121 18:45:57.643245 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-7dc4588586-kd9rf_5b9fc1f3-5435-4492-a8fb-4ab02d6de3f6/bridge/1.log" Jan 21 18:45:57 crc kubenswrapper[5099]: I0121 18:45:57.648417 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-7dc4588586-kd9rf_5b9fc1f3-5435-4492-a8fb-4ab02d6de3f6/sg-core/0.log" Jan 21 18:45:57 crc kubenswrapper[5099]: I0121 18:45:57.662003 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk_0afa2545-4e28-415f-b67f-e1825e024da4/oauth-proxy/0.log" Jan 21 18:45:57 crc kubenswrapper[5099]: I0121 18:45:57.669611 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk_0afa2545-4e28-415f-b67f-e1825e024da4/bridge/2.log" Jan 21 18:45:57 crc kubenswrapper[5099]: I0121 18:45:57.669892 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk_0afa2545-4e28-415f-b67f-e1825e024da4/bridge/1.log" Jan 21 18:45:57 crc kubenswrapper[5099]: I0121 18:45:57.674645 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7f8f5c6486-mtmqk_0afa2545-4e28-415f-b67f-e1825e024da4/sg-core/0.log" Jan 21 18:45:57 crc kubenswrapper[5099]: I0121 18:45:57.690811 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv_201688ad-f074-4dc2-9033-36f09f9e4a9d/oauth-proxy/0.log" Jan 21 18:45:57 crc kubenswrapper[5099]: I0121 18:45:57.700951 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv_201688ad-f074-4dc2-9033-36f09f9e4a9d/bridge/2.log" Jan 21 18:45:57 crc kubenswrapper[5099]: I0121 18:45:57.701179 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv_201688ad-f074-4dc2-9033-36f09f9e4a9d/bridge/1.log" Jan 21 18:45:57 crc kubenswrapper[5099]: I0121 18:45:57.710033 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-58c78bbf69-wfgnv_201688ad-f074-4dc2-9033-36f09f9e4a9d/sg-core/0.log" Jan 21 18:45:57 crc 
kubenswrapper[5099]: I0121 18:45:57.738388 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-interconnect-55bf8d5cb-xddc7_07059825-6270-48dd-9737-b401f10d1f1e/default-interconnect/0.log" Jan 21 18:45:57 crc kubenswrapper[5099]: I0121 18:45:57.751957 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-694dc457d5-rk86p_d63f0418-bf6c-4a0a-8b72-8fa1215358c0/prometheus-webhook-snmp/0.log" Jan 21 18:45:57 crc kubenswrapper[5099]: I0121 18:45:57.798001 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elastic-operator-77fcd4bd5f-lbdf6_4ac61964-c47e-486d-b2d3-13c9d16ae66c/manager/0.log" Jan 21 18:45:57 crc kubenswrapper[5099]: I0121 18:45:57.826052 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_67bd99a7-8bd7-4673-a648-c41eee407194/elasticsearch/0.log" Jan 21 18:45:57 crc kubenswrapper[5099]: I0121 18:45:57.836774 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_67bd99a7-8bd7-4673-a648-c41eee407194/elastic-internal-init-filesystem/0.log" Jan 21 18:45:57 crc kubenswrapper[5099]: I0121 18:45:57.848775 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_67bd99a7-8bd7-4673-a648-c41eee407194/elastic-internal-suspend/0.log" Jan 21 18:45:57 crc kubenswrapper[5099]: I0121 18:45:57.862686 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_interconnect-operator-78b9bd8798-fs494_252bafbe-0c68-4b4b-85f4-9f782a1b57b5/interconnect-operator/0.log" Jan 21 18:45:57 crc kubenswrapper[5099]: I0121 18:45:57.879573 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_60afeeee-13e5-4557-8409-391a5ae528c8/prometheus/0.log" Jan 21 18:45:57 crc kubenswrapper[5099]: I0121 18:45:57.889288 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_60afeeee-13e5-4557-8409-391a5ae528c8/config-reloader/0.log" Jan 21 18:45:57 crc kubenswrapper[5099]: I0121 18:45:57.895628 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_60afeeee-13e5-4557-8409-391a5ae528c8/oauth-proxy/0.log" Jan 21 18:45:57 crc kubenswrapper[5099]: I0121 18:45:57.907918 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_60afeeee-13e5-4557-8409-391a5ae528c8/init-config-reloader/0.log" Jan 21 18:45:57 crc kubenswrapper[5099]: I0121 18:45:57.948382 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_0c18b2f2-9374-4ce8-9cf2-f87d073342ce/docker-build/0.log" Jan 21 18:45:57 crc kubenswrapper[5099]: I0121 18:45:57.954726 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_0c18b2f2-9374-4ce8-9cf2-f87d073342ce/git-clone/0.log" Jan 21 18:45:57 crc kubenswrapper[5099]: I0121 18:45:57.964688 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_0c18b2f2-9374-4ce8-9cf2-f87d073342ce/manage-dockerfile/0.log" Jan 21 18:45:57 crc kubenswrapper[5099]: I0121 18:45:57.982867 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_qdr-test_3b99bf7c-18ed-4371-91d1-e75f1f80ca19/qdr/0.log" Jan 21 18:45:58 crc kubenswrapper[5099]: I0121 18:45:58.041092 5099 
log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_356d3d5a-88fb-4d4c-bc79-cc28af1ac489/docker-build/0.log" Jan 21 18:45:58 crc kubenswrapper[5099]: I0121 18:45:58.050375 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_356d3d5a-88fb-4d4c-bc79-cc28af1ac489/git-clone/0.log" Jan 21 18:45:58 crc kubenswrapper[5099]: I0121 18:45:58.058947 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_356d3d5a-88fb-4d4c-bc79-cc28af1ac489/manage-dockerfile/0.log" Jan 21 18:45:58 crc kubenswrapper[5099]: I0121 18:45:58.267583 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-7d4d5cb5f7-p4dpk_00721a47-1d2e-4b1f-8379-74e69855906d/operator/0.log" Jan 21 18:45:58 crc kubenswrapper[5099]: I0121 18:45:58.321222 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-2-build_8fd04092-0f4e-46c1-a1b0-d9c839d6edbd/docker-build/0.log" Jan 21 18:45:58 crc kubenswrapper[5099]: I0121 18:45:58.328649 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-2-build_8fd04092-0f4e-46c1-a1b0-d9c839d6edbd/git-clone/0.log" Jan 21 18:45:58 crc kubenswrapper[5099]: I0121 18:45:58.337147 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-2-build_8fd04092-0f4e-46c1-a1b0-d9c839d6edbd/manage-dockerfile/0.log" Jan 21 18:45:58 crc kubenswrapper[5099]: I0121 18:45:58.395822 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-2-build_69430763-5b99-43d3-9530-99409ac0586a/docker-build/0.log" Jan 21 18:45:58 crc kubenswrapper[5099]: I0121 18:45:58.403358 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-2-build_69430763-5b99-43d3-9530-99409ac0586a/git-clone/0.log" Jan 21 18:45:58 crc kubenswrapper[5099]: I0121 18:45:58.413335 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-2-build_69430763-5b99-43d3-9530-99409ac0586a/manage-dockerfile/0.log" Jan 21 18:45:58 crc kubenswrapper[5099]: I0121 18:45:58.472872 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-2-build_5a7cdd8f-1476-425d-a189-82a71b306bb2/docker-build/0.log" Jan 21 18:45:58 crc kubenswrapper[5099]: I0121 18:45:58.478890 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-2-build_5a7cdd8f-1476-425d-a189-82a71b306bb2/git-clone/0.log" Jan 21 18:45:58 crc kubenswrapper[5099]: I0121 18:45:58.490228 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-2-build_5a7cdd8f-1476-425d-a189-82a71b306bb2/manage-dockerfile/0.log" Jan 21 18:46:00 crc kubenswrapper[5099]: I0121 18:46:00.148979 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483686-l28rf"] Jan 21 18:46:00 crc kubenswrapper[5099]: I0121 18:46:00.150278 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b391ed19-3c37-4895-8b2d-d097e67c01ba" containerName="collect-profiles" Jan 21 18:46:00 crc kubenswrapper[5099]: I0121 18:46:00.150310 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="b391ed19-3c37-4895-8b2d-d097e67c01ba" containerName="collect-profiles" Jan 21 18:46:00 crc kubenswrapper[5099]: I0121 18:46:00.150522 
5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="b391ed19-3c37-4895-8b2d-d097e67c01ba" containerName="collect-profiles" Jan 21 18:46:00 crc kubenswrapper[5099]: I0121 18:46:00.189987 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483686-l28rf"] Jan 21 18:46:00 crc kubenswrapper[5099]: I0121 18:46:00.190278 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483686-l28rf" Jan 21 18:46:00 crc kubenswrapper[5099]: I0121 18:46:00.195019 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 18:46:00 crc kubenswrapper[5099]: I0121 18:46:00.195272 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 18:46:00 crc kubenswrapper[5099]: I0121 18:46:00.195719 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-z79sf\"" Jan 21 18:46:00 crc kubenswrapper[5099]: I0121 18:46:00.269650 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf6tt\" (UniqueName: \"kubernetes.io/projected/1bec9b3a-6ce3-48d0-aae6-9df255ae482e-kube-api-access-qf6tt\") pod \"auto-csr-approver-29483686-l28rf\" (UID: \"1bec9b3a-6ce3-48d0-aae6-9df255ae482e\") " pod="openshift-infra/auto-csr-approver-29483686-l28rf" Jan 21 18:46:00 crc kubenswrapper[5099]: I0121 18:46:00.371236 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qf6tt\" (UniqueName: \"kubernetes.io/projected/1bec9b3a-6ce3-48d0-aae6-9df255ae482e-kube-api-access-qf6tt\") pod \"auto-csr-approver-29483686-l28rf\" (UID: \"1bec9b3a-6ce3-48d0-aae6-9df255ae482e\") " pod="openshift-infra/auto-csr-approver-29483686-l28rf" Jan 21 18:46:00 crc kubenswrapper[5099]: I0121 18:46:00.406969 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qf6tt\" (UniqueName: \"kubernetes.io/projected/1bec9b3a-6ce3-48d0-aae6-9df255ae482e-kube-api-access-qf6tt\") pod \"auto-csr-approver-29483686-l28rf\" (UID: \"1bec9b3a-6ce3-48d0-aae6-9df255ae482e\") " pod="openshift-infra/auto-csr-approver-29483686-l28rf" Jan 21 18:46:00 crc kubenswrapper[5099]: I0121 18:46:00.517316 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483686-l28rf" Jan 21 18:46:00 crc kubenswrapper[5099]: I0121 18:46:00.755185 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483686-l28rf"] Jan 21 18:46:01 crc kubenswrapper[5099]: I0121 18:46:01.678020 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483686-l28rf" event={"ID":"1bec9b3a-6ce3-48d0-aae6-9df255ae482e","Type":"ContainerStarted","Data":"7167c5ab0ae15e1fab91de10e3f74aa95752afb7199d77745082dbbdecac49b0"} Jan 21 18:46:02 crc kubenswrapper[5099]: I0121 18:46:02.607393 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-86d44c8fc9-hzgpv_bfae1586-4cb9-4058-a0e1-151a2e3b5ad7/operator/0.log" Jan 21 18:46:02 crc kubenswrapper[5099]: I0121 18:46:02.628958 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-8md8w_5d639bb9-54bd-489e-b9ef-7fa15d6649d0/smoketest-collectd/0.log" Jan 21 18:46:02 crc kubenswrapper[5099]: I0121 18:46:02.635200 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-8md8w_5d639bb9-54bd-489e-b9ef-7fa15d6649d0/smoketest-ceilometer/0.log" Jan 21 18:46:02 crc kubenswrapper[5099]: I0121 18:46:02.659567 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-l9hjv_b63ebd7b-dd5d-4649-9236-253b8c930ef9/smoketest-collectd/0.log" Jan 21 18:46:02 crc kubenswrapper[5099]: I0121 18:46:02.666710 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-l9hjv_b63ebd7b-dd5d-4649-9236-253b8c930ef9/smoketest-ceilometer/0.log" Jan 21 18:46:02 crc kubenswrapper[5099]: I0121 18:46:02.686667 5099 generic.go:358] "Generic (PLEG): container finished" podID="1bec9b3a-6ce3-48d0-aae6-9df255ae482e" containerID="45cea5afcfb05cd11aef972c1941f3c3a4680dad454a89f06237729f12d07885" exitCode=0 Jan 21 18:46:02 crc kubenswrapper[5099]: I0121 18:46:02.687182 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483686-l28rf" event={"ID":"1bec9b3a-6ce3-48d0-aae6-9df255ae482e","Type":"ContainerDied","Data":"45cea5afcfb05cd11aef972c1941f3c3a4680dad454a89f06237729f12d07885"} Jan 21 18:46:03 crc kubenswrapper[5099]: I0121 18:46:03.975183 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483686-l28rf" Jan 21 18:46:04 crc kubenswrapper[5099]: I0121 18:46:04.121452 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6pvpm_d9b34413-4767-4d59-b13b-8f882453977a/kube-multus/1.log" Jan 21 18:46:04 crc kubenswrapper[5099]: I0121 18:46:04.124118 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6pvpm_d9b34413-4767-4d59-b13b-8f882453977a/kube-multus/0.log" Jan 21 18:46:04 crc kubenswrapper[5099]: I0121 18:46:04.132038 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qf6tt\" (UniqueName: \"kubernetes.io/projected/1bec9b3a-6ce3-48d0-aae6-9df255ae482e-kube-api-access-qf6tt\") pod \"1bec9b3a-6ce3-48d0-aae6-9df255ae482e\" (UID: \"1bec9b3a-6ce3-48d0-aae6-9df255ae482e\") " Jan 21 18:46:04 crc kubenswrapper[5099]: I0121 18:46:04.139617 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bec9b3a-6ce3-48d0-aae6-9df255ae482e-kube-api-access-qf6tt" (OuterVolumeSpecName: "kube-api-access-qf6tt") pod "1bec9b3a-6ce3-48d0-aae6-9df255ae482e" (UID: "1bec9b3a-6ce3-48d0-aae6-9df255ae482e"). InnerVolumeSpecName "kube-api-access-qf6tt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:46:04 crc kubenswrapper[5099]: I0121 18:46:04.145848 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-bb5lc_fedcb6dd-93e2-4530-b748-52a296d7809d/kube-multus-additional-cni-plugins/0.log" Jan 21 18:46:04 crc kubenswrapper[5099]: I0121 18:46:04.153454 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-bb5lc_fedcb6dd-93e2-4530-b748-52a296d7809d/egress-router-binary-copy/0.log" Jan 21 18:46:04 crc kubenswrapper[5099]: I0121 18:46:04.161586 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-bb5lc_fedcb6dd-93e2-4530-b748-52a296d7809d/cni-plugins/0.log" Jan 21 18:46:04 crc kubenswrapper[5099]: I0121 18:46:04.169965 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-bb5lc_fedcb6dd-93e2-4530-b748-52a296d7809d/bond-cni-plugin/0.log" Jan 21 18:46:04 crc kubenswrapper[5099]: I0121 18:46:04.177696 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-bb5lc_fedcb6dd-93e2-4530-b748-52a296d7809d/routeoverride-cni/0.log" Jan 21 18:46:04 crc kubenswrapper[5099]: I0121 18:46:04.186187 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-bb5lc_fedcb6dd-93e2-4530-b748-52a296d7809d/whereabouts-cni-bincopy/0.log" Jan 21 18:46:04 crc kubenswrapper[5099]: I0121 18:46:04.196289 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-bb5lc_fedcb6dd-93e2-4530-b748-52a296d7809d/whereabouts-cni/0.log" Jan 21 18:46:04 crc kubenswrapper[5099]: I0121 18:46:04.208774 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-69db94689b-xfrc5_da3a0959-1a85-473a-95d5-51b77e30c5da/multus-admission-controller/0.log" Jan 21 18:46:04 crc kubenswrapper[5099]: I0121 18:46:04.218236 5099 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-admission-controller-69db94689b-xfrc5_da3a0959-1a85-473a-95d5-51b77e30c5da/kube-rbac-proxy/0.log" Jan 21 18:46:04 crc kubenswrapper[5099]: I0121 18:46:04.234517 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qf6tt\" (UniqueName: \"kubernetes.io/projected/1bec9b3a-6ce3-48d0-aae6-9df255ae482e-kube-api-access-qf6tt\") on node \"crc\" DevicePath \"\"" Jan 21 18:46:04 crc kubenswrapper[5099]: I0121 18:46:04.245300 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-tsdhb_0d26f0ad-829f-4f64-82b5-1292bd2316f0/network-metrics-daemon/0.log" Jan 21 18:46:04 crc kubenswrapper[5099]: I0121 18:46:04.252172 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-tsdhb_0d26f0ad-829f-4f64-82b5-1292bd2316f0/kube-rbac-proxy/0.log" Jan 21 18:46:04 crc kubenswrapper[5099]: I0121 18:46:04.704728 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483686-l28rf" Jan 21 18:46:04 crc kubenswrapper[5099]: I0121 18:46:04.704792 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483686-l28rf" event={"ID":"1bec9b3a-6ce3-48d0-aae6-9df255ae482e","Type":"ContainerDied","Data":"7167c5ab0ae15e1fab91de10e3f74aa95752afb7199d77745082dbbdecac49b0"} Jan 21 18:46:04 crc kubenswrapper[5099]: I0121 18:46:04.705312 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7167c5ab0ae15e1fab91de10e3f74aa95752afb7199d77745082dbbdecac49b0" Jan 21 18:46:05 crc kubenswrapper[5099]: I0121 18:46:05.052988 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483680-jwp8v"] Jan 21 18:46:05 crc kubenswrapper[5099]: I0121 18:46:05.062010 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483680-jwp8v"] Jan 21 18:46:05 crc kubenswrapper[5099]: I0121 18:46:05.924843 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da873b20-de27-4eff-87df-d71a7310be1e" path="/var/lib/kubelet/pods/da873b20-de27-4eff-87df-d71a7310be1e/volumes" Jan 21 18:46:07 crc kubenswrapper[5099]: I0121 18:46:07.913786 5099 scope.go:117] "RemoveContainer" containerID="2597b26be062a6fac05730bc303e55baf9c4e17d7a57f2962c96e40cd24ca0da" Jan 21 18:46:07 crc kubenswrapper[5099]: E0121 18:46:07.915332 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 18:46:18 crc kubenswrapper[5099]: I0121 18:46:18.237401 5099 scope.go:117] "RemoveContainer" containerID="632ae0b6af8c1e8a0bfd3336c0bc339e9face251ea1970e3f7cbffd91e62d5fb" Jan 21 18:46:22 crc kubenswrapper[5099]: I0121 18:46:22.913371 5099 scope.go:117] "RemoveContainer" containerID="2597b26be062a6fac05730bc303e55baf9c4e17d7a57f2962c96e40cd24ca0da" Jan 21 18:46:23 crc kubenswrapper[5099]: I0121 18:46:23.872769 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" 
event={"ID":"b19b831f-eaf0-4c77-859b-84eb9a5f233c","Type":"ContainerStarted","Data":"389f7e89eb63b547e19ab7ac39d47c2a3189e7b6cf539b6005f3fc375000fdb9"} Jan 21 18:48:00 crc kubenswrapper[5099]: I0121 18:48:00.142060 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483688-86z6l"] Jan 21 18:48:00 crc kubenswrapper[5099]: I0121 18:48:00.146159 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1bec9b3a-6ce3-48d0-aae6-9df255ae482e" containerName="oc" Jan 21 18:48:00 crc kubenswrapper[5099]: I0121 18:48:00.146271 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bec9b3a-6ce3-48d0-aae6-9df255ae482e" containerName="oc" Jan 21 18:48:00 crc kubenswrapper[5099]: I0121 18:48:00.146465 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="1bec9b3a-6ce3-48d0-aae6-9df255ae482e" containerName="oc" Jan 21 18:48:00 crc kubenswrapper[5099]: I0121 18:48:00.156690 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483688-86z6l"] Jan 21 18:48:00 crc kubenswrapper[5099]: I0121 18:48:00.157049 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483688-86z6l" Jan 21 18:48:00 crc kubenswrapper[5099]: I0121 18:48:00.160609 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-z79sf\"" Jan 21 18:48:00 crc kubenswrapper[5099]: I0121 18:48:00.160790 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 18:48:00 crc kubenswrapper[5099]: I0121 18:48:00.161029 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 18:48:00 crc kubenswrapper[5099]: I0121 18:48:00.208836 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9f8n\" (UniqueName: \"kubernetes.io/projected/c66abe68-372d-4a35-b4ad-44b47aec51e9-kube-api-access-v9f8n\") pod \"auto-csr-approver-29483688-86z6l\" (UID: \"c66abe68-372d-4a35-b4ad-44b47aec51e9\") " pod="openshift-infra/auto-csr-approver-29483688-86z6l" Jan 21 18:48:00 crc kubenswrapper[5099]: I0121 18:48:00.310779 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v9f8n\" (UniqueName: \"kubernetes.io/projected/c66abe68-372d-4a35-b4ad-44b47aec51e9-kube-api-access-v9f8n\") pod \"auto-csr-approver-29483688-86z6l\" (UID: \"c66abe68-372d-4a35-b4ad-44b47aec51e9\") " pod="openshift-infra/auto-csr-approver-29483688-86z6l" Jan 21 18:48:00 crc kubenswrapper[5099]: I0121 18:48:00.335450 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9f8n\" (UniqueName: \"kubernetes.io/projected/c66abe68-372d-4a35-b4ad-44b47aec51e9-kube-api-access-v9f8n\") pod \"auto-csr-approver-29483688-86z6l\" (UID: \"c66abe68-372d-4a35-b4ad-44b47aec51e9\") " pod="openshift-infra/auto-csr-approver-29483688-86z6l" Jan 21 18:48:00 crc kubenswrapper[5099]: I0121 18:48:00.482294 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483688-86z6l" Jan 21 18:48:00 crc kubenswrapper[5099]: I0121 18:48:00.729473 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483688-86z6l"] Jan 21 18:48:00 crc kubenswrapper[5099]: I0121 18:48:00.737446 5099 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 18:48:00 crc kubenswrapper[5099]: I0121 18:48:00.859447 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483688-86z6l" event={"ID":"c66abe68-372d-4a35-b4ad-44b47aec51e9","Type":"ContainerStarted","Data":"e15ecfbf873cbec5577247f565d52703174a68f1f5ffde76b4240cf721054850"} Jan 21 18:48:02 crc kubenswrapper[5099]: I0121 18:48:02.879663 5099 generic.go:358] "Generic (PLEG): container finished" podID="c66abe68-372d-4a35-b4ad-44b47aec51e9" containerID="ff25e23c46f1ceff91e0f82de4dfa0a61dd0cb403411c91d1dbc9a6804b1e381" exitCode=0 Jan 21 18:48:02 crc kubenswrapper[5099]: I0121 18:48:02.879846 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483688-86z6l" event={"ID":"c66abe68-372d-4a35-b4ad-44b47aec51e9","Type":"ContainerDied","Data":"ff25e23c46f1ceff91e0f82de4dfa0a61dd0cb403411c91d1dbc9a6804b1e381"} Jan 21 18:48:04 crc kubenswrapper[5099]: I0121 18:48:04.153809 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483688-86z6l" Jan 21 18:48:04 crc kubenswrapper[5099]: I0121 18:48:04.288906 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9f8n\" (UniqueName: \"kubernetes.io/projected/c66abe68-372d-4a35-b4ad-44b47aec51e9-kube-api-access-v9f8n\") pod \"c66abe68-372d-4a35-b4ad-44b47aec51e9\" (UID: \"c66abe68-372d-4a35-b4ad-44b47aec51e9\") " Jan 21 18:48:04 crc kubenswrapper[5099]: I0121 18:48:04.304549 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c66abe68-372d-4a35-b4ad-44b47aec51e9-kube-api-access-v9f8n" (OuterVolumeSpecName: "kube-api-access-v9f8n") pod "c66abe68-372d-4a35-b4ad-44b47aec51e9" (UID: "c66abe68-372d-4a35-b4ad-44b47aec51e9"). InnerVolumeSpecName "kube-api-access-v9f8n". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:48:04 crc kubenswrapper[5099]: I0121 18:48:04.391324 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v9f8n\" (UniqueName: \"kubernetes.io/projected/c66abe68-372d-4a35-b4ad-44b47aec51e9-kube-api-access-v9f8n\") on node \"crc\" DevicePath \"\"" Jan 21 18:48:04 crc kubenswrapper[5099]: I0121 18:48:04.901746 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483688-86z6l" Jan 21 18:48:04 crc kubenswrapper[5099]: I0121 18:48:04.901975 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483688-86z6l" event={"ID":"c66abe68-372d-4a35-b4ad-44b47aec51e9","Type":"ContainerDied","Data":"e15ecfbf873cbec5577247f565d52703174a68f1f5ffde76b4240cf721054850"} Jan 21 18:48:04 crc kubenswrapper[5099]: I0121 18:48:04.902025 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e15ecfbf873cbec5577247f565d52703174a68f1f5ffde76b4240cf721054850" Jan 21 18:48:05 crc kubenswrapper[5099]: I0121 18:48:05.225317 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483682-4gjbs"] Jan 21 18:48:05 crc kubenswrapper[5099]: I0121 18:48:05.232611 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483682-4gjbs"] Jan 21 18:48:05 crc kubenswrapper[5099]: I0121 18:48:05.923858 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="550091f1-5315-4c09-9616-16b34eddef3a" path="/var/lib/kubelet/pods/550091f1-5315-4c09-9616-16b34eddef3a/volumes" Jan 21 18:48:16 crc kubenswrapper[5099]: I0121 18:48:16.421857 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mn7hp"] Jan 21 18:48:16 crc kubenswrapper[5099]: I0121 18:48:16.423966 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c66abe68-372d-4a35-b4ad-44b47aec51e9" containerName="oc" Jan 21 18:48:16 crc kubenswrapper[5099]: I0121 18:48:16.423991 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="c66abe68-372d-4a35-b4ad-44b47aec51e9" containerName="oc" Jan 21 18:48:16 crc kubenswrapper[5099]: I0121 18:48:16.424184 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="c66abe68-372d-4a35-b4ad-44b47aec51e9" containerName="oc" Jan 21 18:48:16 crc kubenswrapper[5099]: I0121 18:48:16.453585 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mn7hp"] Jan 21 18:48:16 crc kubenswrapper[5099]: I0121 18:48:16.453869 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mn7hp" Jan 21 18:48:16 crc kubenswrapper[5099]: I0121 18:48:16.527830 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bacc63ce-41bd-4bce-a044-029fc59a0b1d-utilities\") pod \"certified-operators-mn7hp\" (UID: \"bacc63ce-41bd-4bce-a044-029fc59a0b1d\") " pod="openshift-marketplace/certified-operators-mn7hp" Jan 21 18:48:16 crc kubenswrapper[5099]: I0121 18:48:16.528454 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbspr\" (UniqueName: \"kubernetes.io/projected/bacc63ce-41bd-4bce-a044-029fc59a0b1d-kube-api-access-hbspr\") pod \"certified-operators-mn7hp\" (UID: \"bacc63ce-41bd-4bce-a044-029fc59a0b1d\") " pod="openshift-marketplace/certified-operators-mn7hp" Jan 21 18:48:16 crc kubenswrapper[5099]: I0121 18:48:16.528541 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bacc63ce-41bd-4bce-a044-029fc59a0b1d-catalog-content\") pod \"certified-operators-mn7hp\" (UID: \"bacc63ce-41bd-4bce-a044-029fc59a0b1d\") " pod="openshift-marketplace/certified-operators-mn7hp" Jan 21 18:48:16 crc kubenswrapper[5099]: I0121 18:48:16.629452 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bacc63ce-41bd-4bce-a044-029fc59a0b1d-catalog-content\") pod \"certified-operators-mn7hp\" (UID: \"bacc63ce-41bd-4bce-a044-029fc59a0b1d\") " pod="openshift-marketplace/certified-operators-mn7hp" Jan 21 18:48:16 crc kubenswrapper[5099]: I0121 18:48:16.629556 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bacc63ce-41bd-4bce-a044-029fc59a0b1d-utilities\") pod \"certified-operators-mn7hp\" (UID: \"bacc63ce-41bd-4bce-a044-029fc59a0b1d\") " pod="openshift-marketplace/certified-operators-mn7hp" Jan 21 18:48:16 crc kubenswrapper[5099]: I0121 18:48:16.629588 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hbspr\" (UniqueName: \"kubernetes.io/projected/bacc63ce-41bd-4bce-a044-029fc59a0b1d-kube-api-access-hbspr\") pod \"certified-operators-mn7hp\" (UID: \"bacc63ce-41bd-4bce-a044-029fc59a0b1d\") " pod="openshift-marketplace/certified-operators-mn7hp" Jan 21 18:48:16 crc kubenswrapper[5099]: I0121 18:48:16.630411 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bacc63ce-41bd-4bce-a044-029fc59a0b1d-catalog-content\") pod \"certified-operators-mn7hp\" (UID: \"bacc63ce-41bd-4bce-a044-029fc59a0b1d\") " pod="openshift-marketplace/certified-operators-mn7hp" Jan 21 18:48:16 crc kubenswrapper[5099]: I0121 18:48:16.630629 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bacc63ce-41bd-4bce-a044-029fc59a0b1d-utilities\") pod \"certified-operators-mn7hp\" (UID: \"bacc63ce-41bd-4bce-a044-029fc59a0b1d\") " pod="openshift-marketplace/certified-operators-mn7hp" Jan 21 18:48:16 crc kubenswrapper[5099]: I0121 18:48:16.662755 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbspr\" (UniqueName: \"kubernetes.io/projected/bacc63ce-41bd-4bce-a044-029fc59a0b1d-kube-api-access-hbspr\") pod 
\"certified-operators-mn7hp\" (UID: \"bacc63ce-41bd-4bce-a044-029fc59a0b1d\") " pod="openshift-marketplace/certified-operators-mn7hp" Jan 21 18:48:16 crc kubenswrapper[5099]: I0121 18:48:16.808722 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mn7hp" Jan 21 18:48:17 crc kubenswrapper[5099]: I0121 18:48:17.129232 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mn7hp"] Jan 21 18:48:18 crc kubenswrapper[5099]: I0121 18:48:18.022996 5099 generic.go:358] "Generic (PLEG): container finished" podID="bacc63ce-41bd-4bce-a044-029fc59a0b1d" containerID="bae1058ee5f92329aaed3f468076f18a4c15f6c299835260aeb158f96026cc6f" exitCode=0 Jan 21 18:48:18 crc kubenswrapper[5099]: I0121 18:48:18.023192 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mn7hp" event={"ID":"bacc63ce-41bd-4bce-a044-029fc59a0b1d","Type":"ContainerDied","Data":"bae1058ee5f92329aaed3f468076f18a4c15f6c299835260aeb158f96026cc6f"} Jan 21 18:48:18 crc kubenswrapper[5099]: I0121 18:48:18.023752 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mn7hp" event={"ID":"bacc63ce-41bd-4bce-a044-029fc59a0b1d","Type":"ContainerStarted","Data":"104f145d1f7a30f438cb4b48838f4f223af97ec34bc9546070ced57e70fb884e"} Jan 21 18:48:18 crc kubenswrapper[5099]: I0121 18:48:18.454871 5099 scope.go:117] "RemoveContainer" containerID="57853aa65de70aa4cf974fd92c6dacc6fde271a4e3fe38fde4ce0a19f197a54a" Jan 21 18:48:19 crc kubenswrapper[5099]: I0121 18:48:19.039388 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mn7hp" event={"ID":"bacc63ce-41bd-4bce-a044-029fc59a0b1d","Type":"ContainerStarted","Data":"6ff1fb6ce4820e70cd4fd25549dee6f108965d25bb626d5dfc9231d881ef735f"} Jan 21 18:48:20 crc kubenswrapper[5099]: I0121 18:48:20.056080 5099 generic.go:358] "Generic (PLEG): container finished" podID="bacc63ce-41bd-4bce-a044-029fc59a0b1d" containerID="6ff1fb6ce4820e70cd4fd25549dee6f108965d25bb626d5dfc9231d881ef735f" exitCode=0 Jan 21 18:48:20 crc kubenswrapper[5099]: I0121 18:48:20.056184 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mn7hp" event={"ID":"bacc63ce-41bd-4bce-a044-029fc59a0b1d","Type":"ContainerDied","Data":"6ff1fb6ce4820e70cd4fd25549dee6f108965d25bb626d5dfc9231d881ef735f"} Jan 21 18:48:21 crc kubenswrapper[5099]: I0121 18:48:21.077906 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mn7hp" event={"ID":"bacc63ce-41bd-4bce-a044-029fc59a0b1d","Type":"ContainerStarted","Data":"688f314fdeba3873c50bc8ebebcfedcd2cf97397e404de1dc97b9f1dbeb5cf7a"} Jan 21 18:48:21 crc kubenswrapper[5099]: I0121 18:48:21.103992 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mn7hp" podStartSLOduration=4.339683284 podStartE2EDuration="5.103962739s" podCreationTimestamp="2026-01-21 18:48:16 +0000 UTC" firstStartedPulling="2026-01-21 18:48:18.025215668 +0000 UTC m=+2055.439178129" lastFinishedPulling="2026-01-21 18:48:18.789495123 +0000 UTC m=+2056.203457584" observedRunningTime="2026-01-21 18:48:21.101801906 +0000 UTC m=+2058.515764367" watchObservedRunningTime="2026-01-21 18:48:21.103962739 +0000 UTC m=+2058.517925210" Jan 21 18:48:26 crc kubenswrapper[5099]: I0121 18:48:26.809778 5099 kubelet.go:2658] "SyncLoop (probe)" 
probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-mn7hp" Jan 21 18:48:26 crc kubenswrapper[5099]: I0121 18:48:26.810347 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mn7hp" Jan 21 18:48:26 crc kubenswrapper[5099]: I0121 18:48:26.870020 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mn7hp" Jan 21 18:48:27 crc kubenswrapper[5099]: I0121 18:48:27.194499 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mn7hp" Jan 21 18:48:27 crc kubenswrapper[5099]: I0121 18:48:27.252486 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mn7hp"] Jan 21 18:48:29 crc kubenswrapper[5099]: I0121 18:48:29.164898 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mn7hp" podUID="bacc63ce-41bd-4bce-a044-029fc59a0b1d" containerName="registry-server" containerID="cri-o://688f314fdeba3873c50bc8ebebcfedcd2cf97397e404de1dc97b9f1dbeb5cf7a" gracePeriod=2 Jan 21 18:48:29 crc kubenswrapper[5099]: I0121 18:48:29.618148 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mn7hp" Jan 21 18:48:29 crc kubenswrapper[5099]: I0121 18:48:29.716792 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bacc63ce-41bd-4bce-a044-029fc59a0b1d-catalog-content\") pod \"bacc63ce-41bd-4bce-a044-029fc59a0b1d\" (UID: \"bacc63ce-41bd-4bce-a044-029fc59a0b1d\") " Jan 21 18:48:29 crc kubenswrapper[5099]: I0121 18:48:29.716907 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bacc63ce-41bd-4bce-a044-029fc59a0b1d-utilities\") pod \"bacc63ce-41bd-4bce-a044-029fc59a0b1d\" (UID: \"bacc63ce-41bd-4bce-a044-029fc59a0b1d\") " Jan 21 18:48:29 crc kubenswrapper[5099]: I0121 18:48:29.718040 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bacc63ce-41bd-4bce-a044-029fc59a0b1d-utilities" (OuterVolumeSpecName: "utilities") pod "bacc63ce-41bd-4bce-a044-029fc59a0b1d" (UID: "bacc63ce-41bd-4bce-a044-029fc59a0b1d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:48:29 crc kubenswrapper[5099]: I0121 18:48:29.758481 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bacc63ce-41bd-4bce-a044-029fc59a0b1d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bacc63ce-41bd-4bce-a044-029fc59a0b1d" (UID: "bacc63ce-41bd-4bce-a044-029fc59a0b1d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:48:29 crc kubenswrapper[5099]: I0121 18:48:29.817908 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbspr\" (UniqueName: \"kubernetes.io/projected/bacc63ce-41bd-4bce-a044-029fc59a0b1d-kube-api-access-hbspr\") pod \"bacc63ce-41bd-4bce-a044-029fc59a0b1d\" (UID: \"bacc63ce-41bd-4bce-a044-029fc59a0b1d\") " Jan 21 18:48:29 crc kubenswrapper[5099]: I0121 18:48:29.818432 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bacc63ce-41bd-4bce-a044-029fc59a0b1d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 18:48:29 crc kubenswrapper[5099]: I0121 18:48:29.818485 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bacc63ce-41bd-4bce-a044-029fc59a0b1d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 18:48:29 crc kubenswrapper[5099]: I0121 18:48:29.841240 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bacc63ce-41bd-4bce-a044-029fc59a0b1d-kube-api-access-hbspr" (OuterVolumeSpecName: "kube-api-access-hbspr") pod "bacc63ce-41bd-4bce-a044-029fc59a0b1d" (UID: "bacc63ce-41bd-4bce-a044-029fc59a0b1d"). InnerVolumeSpecName "kube-api-access-hbspr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:48:29 crc kubenswrapper[5099]: I0121 18:48:29.919188 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hbspr\" (UniqueName: \"kubernetes.io/projected/bacc63ce-41bd-4bce-a044-029fc59a0b1d-kube-api-access-hbspr\") on node \"crc\" DevicePath \"\"" Jan 21 18:48:29 crc kubenswrapper[5099]: E0121 18:48:29.969509 5099 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbacc63ce_41bd_4bce_a044_029fc59a0b1d.slice/crio-104f145d1f7a30f438cb4b48838f4f223af97ec34bc9546070ced57e70fb884e\": RecentStats: unable to find data in memory cache]" Jan 21 18:48:30 crc kubenswrapper[5099]: I0121 18:48:30.175318 5099 generic.go:358] "Generic (PLEG): container finished" podID="bacc63ce-41bd-4bce-a044-029fc59a0b1d" containerID="688f314fdeba3873c50bc8ebebcfedcd2cf97397e404de1dc97b9f1dbeb5cf7a" exitCode=0 Jan 21 18:48:30 crc kubenswrapper[5099]: I0121 18:48:30.175410 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mn7hp" event={"ID":"bacc63ce-41bd-4bce-a044-029fc59a0b1d","Type":"ContainerDied","Data":"688f314fdeba3873c50bc8ebebcfedcd2cf97397e404de1dc97b9f1dbeb5cf7a"} Jan 21 18:48:30 crc kubenswrapper[5099]: I0121 18:48:30.175521 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mn7hp" event={"ID":"bacc63ce-41bd-4bce-a044-029fc59a0b1d","Type":"ContainerDied","Data":"104f145d1f7a30f438cb4b48838f4f223af97ec34bc9546070ced57e70fb884e"} Jan 21 18:48:30 crc kubenswrapper[5099]: I0121 18:48:30.175564 5099 scope.go:117] "RemoveContainer" containerID="688f314fdeba3873c50bc8ebebcfedcd2cf97397e404de1dc97b9f1dbeb5cf7a" Jan 21 18:48:30 crc kubenswrapper[5099]: I0121 18:48:30.177319 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mn7hp" Jan 21 18:48:30 crc kubenswrapper[5099]: I0121 18:48:30.207356 5099 scope.go:117] "RemoveContainer" containerID="6ff1fb6ce4820e70cd4fd25549dee6f108965d25bb626d5dfc9231d881ef735f" Jan 21 18:48:30 crc kubenswrapper[5099]: I0121 18:48:30.210210 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mn7hp"] Jan 21 18:48:30 crc kubenswrapper[5099]: I0121 18:48:30.221572 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mn7hp"] Jan 21 18:48:30 crc kubenswrapper[5099]: I0121 18:48:30.231334 5099 scope.go:117] "RemoveContainer" containerID="bae1058ee5f92329aaed3f468076f18a4c15f6c299835260aeb158f96026cc6f" Jan 21 18:48:30 crc kubenswrapper[5099]: I0121 18:48:30.253519 5099 scope.go:117] "RemoveContainer" containerID="688f314fdeba3873c50bc8ebebcfedcd2cf97397e404de1dc97b9f1dbeb5cf7a" Jan 21 18:48:30 crc kubenswrapper[5099]: E0121 18:48:30.254321 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"688f314fdeba3873c50bc8ebebcfedcd2cf97397e404de1dc97b9f1dbeb5cf7a\": container with ID starting with 688f314fdeba3873c50bc8ebebcfedcd2cf97397e404de1dc97b9f1dbeb5cf7a not found: ID does not exist" containerID="688f314fdeba3873c50bc8ebebcfedcd2cf97397e404de1dc97b9f1dbeb5cf7a" Jan 21 18:48:30 crc kubenswrapper[5099]: I0121 18:48:30.254369 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"688f314fdeba3873c50bc8ebebcfedcd2cf97397e404de1dc97b9f1dbeb5cf7a"} err="failed to get container status \"688f314fdeba3873c50bc8ebebcfedcd2cf97397e404de1dc97b9f1dbeb5cf7a\": rpc error: code = NotFound desc = could not find container \"688f314fdeba3873c50bc8ebebcfedcd2cf97397e404de1dc97b9f1dbeb5cf7a\": container with ID starting with 688f314fdeba3873c50bc8ebebcfedcd2cf97397e404de1dc97b9f1dbeb5cf7a not found: ID does not exist" Jan 21 18:48:30 crc kubenswrapper[5099]: I0121 18:48:30.254397 5099 scope.go:117] "RemoveContainer" containerID="6ff1fb6ce4820e70cd4fd25549dee6f108965d25bb626d5dfc9231d881ef735f" Jan 21 18:48:30 crc kubenswrapper[5099]: E0121 18:48:30.254897 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ff1fb6ce4820e70cd4fd25549dee6f108965d25bb626d5dfc9231d881ef735f\": container with ID starting with 6ff1fb6ce4820e70cd4fd25549dee6f108965d25bb626d5dfc9231d881ef735f not found: ID does not exist" containerID="6ff1fb6ce4820e70cd4fd25549dee6f108965d25bb626d5dfc9231d881ef735f" Jan 21 18:48:30 crc kubenswrapper[5099]: I0121 18:48:30.254925 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ff1fb6ce4820e70cd4fd25549dee6f108965d25bb626d5dfc9231d881ef735f"} err="failed to get container status \"6ff1fb6ce4820e70cd4fd25549dee6f108965d25bb626d5dfc9231d881ef735f\": rpc error: code = NotFound desc = could not find container \"6ff1fb6ce4820e70cd4fd25549dee6f108965d25bb626d5dfc9231d881ef735f\": container with ID starting with 6ff1fb6ce4820e70cd4fd25549dee6f108965d25bb626d5dfc9231d881ef735f not found: ID does not exist" Jan 21 18:48:30 crc kubenswrapper[5099]: I0121 18:48:30.254940 5099 scope.go:117] "RemoveContainer" containerID="bae1058ee5f92329aaed3f468076f18a4c15f6c299835260aeb158f96026cc6f" Jan 21 18:48:30 crc kubenswrapper[5099]: E0121 18:48:30.255151 5099 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"bae1058ee5f92329aaed3f468076f18a4c15f6c299835260aeb158f96026cc6f\": container with ID starting with bae1058ee5f92329aaed3f468076f18a4c15f6c299835260aeb158f96026cc6f not found: ID does not exist" containerID="bae1058ee5f92329aaed3f468076f18a4c15f6c299835260aeb158f96026cc6f" Jan 21 18:48:30 crc kubenswrapper[5099]: I0121 18:48:30.255176 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bae1058ee5f92329aaed3f468076f18a4c15f6c299835260aeb158f96026cc6f"} err="failed to get container status \"bae1058ee5f92329aaed3f468076f18a4c15f6c299835260aeb158f96026cc6f\": rpc error: code = NotFound desc = could not find container \"bae1058ee5f92329aaed3f468076f18a4c15f6c299835260aeb158f96026cc6f\": container with ID starting with bae1058ee5f92329aaed3f468076f18a4c15f6c299835260aeb158f96026cc6f not found: ID does not exist" Jan 21 18:48:31 crc kubenswrapper[5099]: I0121 18:48:31.924281 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bacc63ce-41bd-4bce-a044-029fc59a0b1d" path="/var/lib/kubelet/pods/bacc63ce-41bd-4bce-a044-029fc59a0b1d/volumes" Jan 21 18:48:52 crc kubenswrapper[5099]: I0121 18:48:52.064509 5099 patch_prober.go:28] interesting pod/machine-config-daemon-hsl47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 18:48:52 crc kubenswrapper[5099]: I0121 18:48:52.065495 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 18:49:04 crc kubenswrapper[5099]: I0121 18:49:04.840848 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6pvpm_d9b34413-4767-4d59-b13b-8f882453977a/kube-multus/0.log" Jan 21 18:49:04 crc kubenswrapper[5099]: I0121 18:49:04.842947 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6pvpm_d9b34413-4767-4d59-b13b-8f882453977a/kube-multus/0.log" Jan 21 18:49:04 crc kubenswrapper[5099]: I0121 18:49:04.850366 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 18:49:04 crc kubenswrapper[5099]: I0121 18:49:04.850490 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 18:49:22 crc kubenswrapper[5099]: I0121 18:49:22.064411 5099 patch_prober.go:28] interesting pod/machine-config-daemon-hsl47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 18:49:22 crc kubenswrapper[5099]: I0121 18:49:22.065623 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 18:49:26 crc kubenswrapper[5099]: I0121 18:49:26.149186 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s6rrv"] Jan 21 18:49:26 crc kubenswrapper[5099]: I0121 18:49:26.153753 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bacc63ce-41bd-4bce-a044-029fc59a0b1d" containerName="registry-server" Jan 21 18:49:26 crc kubenswrapper[5099]: I0121 18:49:26.153803 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="bacc63ce-41bd-4bce-a044-029fc59a0b1d" containerName="registry-server" Jan 21 18:49:26 crc kubenswrapper[5099]: I0121 18:49:26.153835 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bacc63ce-41bd-4bce-a044-029fc59a0b1d" containerName="extract-utilities" Jan 21 18:49:26 crc kubenswrapper[5099]: I0121 18:49:26.153844 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="bacc63ce-41bd-4bce-a044-029fc59a0b1d" containerName="extract-utilities" Jan 21 18:49:26 crc kubenswrapper[5099]: I0121 18:49:26.153868 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bacc63ce-41bd-4bce-a044-029fc59a0b1d" containerName="extract-content" Jan 21 18:49:26 crc kubenswrapper[5099]: I0121 18:49:26.153875 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="bacc63ce-41bd-4bce-a044-029fc59a0b1d" containerName="extract-content" Jan 21 18:49:26 crc kubenswrapper[5099]: I0121 18:49:26.154045 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="bacc63ce-41bd-4bce-a044-029fc59a0b1d" containerName="registry-server" Jan 21 18:49:26 crc kubenswrapper[5099]: I0121 18:49:26.166417 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s6rrv" Jan 21 18:49:26 crc kubenswrapper[5099]: I0121 18:49:26.166722 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s6rrv"] Jan 21 18:49:26 crc kubenswrapper[5099]: I0121 18:49:26.322011 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9w6d\" (UniqueName: \"kubernetes.io/projected/fc8e4c77-fe58-4966-8ffc-aa15256d22d7-kube-api-access-b9w6d\") pod \"community-operators-s6rrv\" (UID: \"fc8e4c77-fe58-4966-8ffc-aa15256d22d7\") " pod="openshift-marketplace/community-operators-s6rrv" Jan 21 18:49:26 crc kubenswrapper[5099]: I0121 18:49:26.322586 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc8e4c77-fe58-4966-8ffc-aa15256d22d7-utilities\") pod \"community-operators-s6rrv\" (UID: \"fc8e4c77-fe58-4966-8ffc-aa15256d22d7\") " pod="openshift-marketplace/community-operators-s6rrv" Jan 21 18:49:26 crc kubenswrapper[5099]: I0121 18:49:26.322856 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc8e4c77-fe58-4966-8ffc-aa15256d22d7-catalog-content\") pod \"community-operators-s6rrv\" (UID: \"fc8e4c77-fe58-4966-8ffc-aa15256d22d7\") " pod="openshift-marketplace/community-operators-s6rrv" Jan 21 18:49:26 crc kubenswrapper[5099]: I0121 18:49:26.424123 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc8e4c77-fe58-4966-8ffc-aa15256d22d7-utilities\") pod \"community-operators-s6rrv\" (UID: \"fc8e4c77-fe58-4966-8ffc-aa15256d22d7\") " pod="openshift-marketplace/community-operators-s6rrv" Jan 21 18:49:26 crc kubenswrapper[5099]: I0121 18:49:26.424201 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc8e4c77-fe58-4966-8ffc-aa15256d22d7-catalog-content\") pod \"community-operators-s6rrv\" (UID: \"fc8e4c77-fe58-4966-8ffc-aa15256d22d7\") " pod="openshift-marketplace/community-operators-s6rrv" Jan 21 18:49:26 crc kubenswrapper[5099]: I0121 18:49:26.424247 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b9w6d\" (UniqueName: \"kubernetes.io/projected/fc8e4c77-fe58-4966-8ffc-aa15256d22d7-kube-api-access-b9w6d\") pod \"community-operators-s6rrv\" (UID: \"fc8e4c77-fe58-4966-8ffc-aa15256d22d7\") " pod="openshift-marketplace/community-operators-s6rrv" Jan 21 18:49:26 crc kubenswrapper[5099]: I0121 18:49:26.425164 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc8e4c77-fe58-4966-8ffc-aa15256d22d7-utilities\") pod \"community-operators-s6rrv\" (UID: \"fc8e4c77-fe58-4966-8ffc-aa15256d22d7\") " pod="openshift-marketplace/community-operators-s6rrv" Jan 21 18:49:26 crc kubenswrapper[5099]: I0121 18:49:26.425288 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc8e4c77-fe58-4966-8ffc-aa15256d22d7-catalog-content\") pod \"community-operators-s6rrv\" (UID: \"fc8e4c77-fe58-4966-8ffc-aa15256d22d7\") " pod="openshift-marketplace/community-operators-s6rrv" Jan 21 18:49:26 crc kubenswrapper[5099]: I0121 18:49:26.451520 5099 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-b9w6d\" (UniqueName: \"kubernetes.io/projected/fc8e4c77-fe58-4966-8ffc-aa15256d22d7-kube-api-access-b9w6d\") pod \"community-operators-s6rrv\" (UID: \"fc8e4c77-fe58-4966-8ffc-aa15256d22d7\") " pod="openshift-marketplace/community-operators-s6rrv" Jan 21 18:49:26 crc kubenswrapper[5099]: I0121 18:49:26.492017 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s6rrv" Jan 21 18:49:26 crc kubenswrapper[5099]: I0121 18:49:26.968599 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s6rrv"] Jan 21 18:49:27 crc kubenswrapper[5099]: I0121 18:49:27.787746 5099 generic.go:358] "Generic (PLEG): container finished" podID="fc8e4c77-fe58-4966-8ffc-aa15256d22d7" containerID="765461aba96496140fbe890752f6c75c4bded0e50ec0f4555527e9bfa0aa8f4e" exitCode=0 Jan 21 18:49:27 crc kubenswrapper[5099]: I0121 18:49:27.788364 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6rrv" event={"ID":"fc8e4c77-fe58-4966-8ffc-aa15256d22d7","Type":"ContainerDied","Data":"765461aba96496140fbe890752f6c75c4bded0e50ec0f4555527e9bfa0aa8f4e"} Jan 21 18:49:27 crc kubenswrapper[5099]: I0121 18:49:27.793589 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6rrv" event={"ID":"fc8e4c77-fe58-4966-8ffc-aa15256d22d7","Type":"ContainerStarted","Data":"d7209ba81a3ab43993248e546d5d28fe7520fc5930ee0d00cd287657599a48ab"} Jan 21 18:49:31 crc kubenswrapper[5099]: I0121 18:49:31.850589 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6rrv" event={"ID":"fc8e4c77-fe58-4966-8ffc-aa15256d22d7","Type":"ContainerStarted","Data":"739b41489c2c8dd5869a6b9c45f92540893f851bd25d3a9ea444d4229e25f5c6"} Jan 21 18:49:32 crc kubenswrapper[5099]: I0121 18:49:32.860038 5099 generic.go:358] "Generic (PLEG): container finished" podID="fc8e4c77-fe58-4966-8ffc-aa15256d22d7" containerID="739b41489c2c8dd5869a6b9c45f92540893f851bd25d3a9ea444d4229e25f5c6" exitCode=0 Jan 21 18:49:32 crc kubenswrapper[5099]: I0121 18:49:32.860340 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6rrv" event={"ID":"fc8e4c77-fe58-4966-8ffc-aa15256d22d7","Type":"ContainerDied","Data":"739b41489c2c8dd5869a6b9c45f92540893f851bd25d3a9ea444d4229e25f5c6"} Jan 21 18:49:33 crc kubenswrapper[5099]: I0121 18:49:33.871892 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6rrv" event={"ID":"fc8e4c77-fe58-4966-8ffc-aa15256d22d7","Type":"ContainerStarted","Data":"15e2bea2e671be1fdc4d2099bf88de25ae4861b327a9a75382271ab2b270f87b"} Jan 21 18:49:33 crc kubenswrapper[5099]: I0121 18:49:33.903540 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s6rrv" podStartSLOduration=4.306549632 podStartE2EDuration="7.903519436s" podCreationTimestamp="2026-01-21 18:49:26 +0000 UTC" firstStartedPulling="2026-01-21 18:49:27.792113247 +0000 UTC m=+2125.206075708" lastFinishedPulling="2026-01-21 18:49:31.389083051 +0000 UTC m=+2128.803045512" observedRunningTime="2026-01-21 18:49:33.899918388 +0000 UTC m=+2131.313880859" watchObservedRunningTime="2026-01-21 18:49:33.903519436 +0000 UTC m=+2131.317481897" Jan 21 18:49:36 crc kubenswrapper[5099]: I0121 18:49:36.493045 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" 
status="not ready" pod="openshift-marketplace/community-operators-s6rrv" Jan 21 18:49:36 crc kubenswrapper[5099]: I0121 18:49:36.493693 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s6rrv" Jan 21 18:49:36 crc kubenswrapper[5099]: I0121 18:49:36.547048 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s6rrv" Jan 21 18:49:46 crc kubenswrapper[5099]: I0121 18:49:46.947172 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s6rrv" Jan 21 18:49:47 crc kubenswrapper[5099]: I0121 18:49:47.032014 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s6rrv"] Jan 21 18:49:47 crc kubenswrapper[5099]: I0121 18:49:47.073304 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9hbr9"] Jan 21 18:49:47 crc kubenswrapper[5099]: I0121 18:49:47.073717 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9hbr9" podUID="fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5" containerName="registry-server" containerID="cri-o://f4f67cd0d06c0b5d4992d7bc431422c535fc4cfb8439a1427898d3509a0b9a3e" gracePeriod=2 Jan 21 18:49:49 crc kubenswrapper[5099]: I0121 18:49:49.018260 5099 generic.go:358] "Generic (PLEG): container finished" podID="fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5" containerID="f4f67cd0d06c0b5d4992d7bc431422c535fc4cfb8439a1427898d3509a0b9a3e" exitCode=0 Jan 21 18:49:49 crc kubenswrapper[5099]: I0121 18:49:49.018325 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hbr9" event={"ID":"fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5","Type":"ContainerDied","Data":"f4f67cd0d06c0b5d4992d7bc431422c535fc4cfb8439a1427898d3509a0b9a3e"} Jan 21 18:49:49 crc kubenswrapper[5099]: I0121 18:49:49.300608 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9hbr9" Jan 21 18:49:49 crc kubenswrapper[5099]: I0121 18:49:49.378915 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nmlk\" (UniqueName: \"kubernetes.io/projected/fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5-kube-api-access-7nmlk\") pod \"fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5\" (UID: \"fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5\") " Jan 21 18:49:49 crc kubenswrapper[5099]: I0121 18:49:49.379013 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5-utilities\") pod \"fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5\" (UID: \"fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5\") " Jan 21 18:49:49 crc kubenswrapper[5099]: I0121 18:49:49.379112 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5-catalog-content\") pod \"fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5\" (UID: \"fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5\") " Jan 21 18:49:49 crc kubenswrapper[5099]: I0121 18:49:49.379806 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5-utilities" (OuterVolumeSpecName: "utilities") pod "fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5" (UID: "fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:49:49 crc kubenswrapper[5099]: I0121 18:49:49.388544 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5-kube-api-access-7nmlk" (OuterVolumeSpecName: "kube-api-access-7nmlk") pod "fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5" (UID: "fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5"). InnerVolumeSpecName "kube-api-access-7nmlk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:49:49 crc kubenswrapper[5099]: I0121 18:49:49.448368 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5" (UID: "fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:49:49 crc kubenswrapper[5099]: I0121 18:49:49.481379 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 18:49:49 crc kubenswrapper[5099]: I0121 18:49:49.481429 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7nmlk\" (UniqueName: \"kubernetes.io/projected/fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5-kube-api-access-7nmlk\") on node \"crc\" DevicePath \"\"" Jan 21 18:49:49 crc kubenswrapper[5099]: I0121 18:49:49.481442 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 18:49:50 crc kubenswrapper[5099]: I0121 18:49:50.031662 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hbr9" event={"ID":"fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5","Type":"ContainerDied","Data":"08e52abc13ad8ae2e68e0cffe1f5c74d463d54585ac73a7664783c43e15062ec"} Jan 21 18:49:50 crc kubenswrapper[5099]: I0121 18:49:50.031753 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9hbr9" Jan 21 18:49:50 crc kubenswrapper[5099]: I0121 18:49:50.031763 5099 scope.go:117] "RemoveContainer" containerID="f4f67cd0d06c0b5d4992d7bc431422c535fc4cfb8439a1427898d3509a0b9a3e" Jan 21 18:49:50 crc kubenswrapper[5099]: I0121 18:49:50.060657 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9hbr9"] Jan 21 18:49:50 crc kubenswrapper[5099]: I0121 18:49:50.070723 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9hbr9"] Jan 21 18:49:50 crc kubenswrapper[5099]: I0121 18:49:50.072365 5099 scope.go:117] "RemoveContainer" containerID="55c9623d4fe4213f6abad7722a68ec5e010a6da82d526f22511aac30accb5825" Jan 21 18:49:50 crc kubenswrapper[5099]: I0121 18:49:50.102202 5099 scope.go:117] "RemoveContainer" containerID="e57ed611df7f2ae6b01ebbefe199af78cbfe99d70c13e364967cecf9f0015a37" Jan 21 18:49:51 crc kubenswrapper[5099]: I0121 18:49:51.925200 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5" path="/var/lib/kubelet/pods/fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5/volumes" Jan 21 18:49:52 crc kubenswrapper[5099]: I0121 18:49:52.064499 5099 patch_prober.go:28] interesting pod/machine-config-daemon-hsl47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 18:49:52 crc kubenswrapper[5099]: I0121 18:49:52.064606 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 18:49:52 crc kubenswrapper[5099]: I0121 18:49:52.064666 5099 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" Jan 21 18:49:52 crc kubenswrapper[5099]: I0121 18:49:52.065555 5099 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"389f7e89eb63b547e19ab7ac39d47c2a3189e7b6cf539b6005f3fc375000fdb9"} pod="openshift-machine-config-operator/machine-config-daemon-hsl47" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 18:49:52 crc kubenswrapper[5099]: I0121 18:49:52.065626 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" containerID="cri-o://389f7e89eb63b547e19ab7ac39d47c2a3189e7b6cf539b6005f3fc375000fdb9" gracePeriod=600 Jan 21 18:49:53 crc kubenswrapper[5099]: I0121 18:49:53.063905 5099 generic.go:358] "Generic (PLEG): container finished" podID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerID="389f7e89eb63b547e19ab7ac39d47c2a3189e7b6cf539b6005f3fc375000fdb9" exitCode=0 Jan 21 18:49:53 crc kubenswrapper[5099]: I0121 18:49:53.063995 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" 
event={"ID":"b19b831f-eaf0-4c77-859b-84eb9a5f233c","Type":"ContainerDied","Data":"389f7e89eb63b547e19ab7ac39d47c2a3189e7b6cf539b6005f3fc375000fdb9"} Jan 21 18:49:53 crc kubenswrapper[5099]: I0121 18:49:53.064535 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" event={"ID":"b19b831f-eaf0-4c77-859b-84eb9a5f233c","Type":"ContainerStarted","Data":"e5ecb8c8febc910222a2599d2113d1f6c3c8b860a16ca2711496a4459929451f"} Jan 21 18:49:53 crc kubenswrapper[5099]: I0121 18:49:53.064561 5099 scope.go:117] "RemoveContainer" containerID="2597b26be062a6fac05730bc303e55baf9c4e17d7a57f2962c96e40cd24ca0da" Jan 21 18:50:00 crc kubenswrapper[5099]: I0121 18:50:00.156532 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483690-zm56j"] Jan 21 18:50:00 crc kubenswrapper[5099]: I0121 18:50:00.158313 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5" containerName="extract-utilities" Jan 21 18:50:00 crc kubenswrapper[5099]: I0121 18:50:00.158330 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5" containerName="extract-utilities" Jan 21 18:50:00 crc kubenswrapper[5099]: I0121 18:50:00.158341 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5" containerName="registry-server" Jan 21 18:50:00 crc kubenswrapper[5099]: I0121 18:50:00.158347 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5" containerName="registry-server" Jan 21 18:50:00 crc kubenswrapper[5099]: I0121 18:50:00.158384 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5" containerName="extract-content" Jan 21 18:50:00 crc kubenswrapper[5099]: I0121 18:50:00.158390 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5" containerName="extract-content" Jan 21 18:50:00 crc kubenswrapper[5099]: I0121 18:50:00.158518 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="fb6ec1ca-5f61-427a-b0db-e5eb8a083ad5" containerName="registry-server" Jan 21 18:50:00 crc kubenswrapper[5099]: I0121 18:50:00.186401 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483690-zm56j"] Jan 21 18:50:00 crc kubenswrapper[5099]: I0121 18:50:00.186605 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483690-zm56j" Jan 21 18:50:00 crc kubenswrapper[5099]: I0121 18:50:00.190573 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 18:50:00 crc kubenswrapper[5099]: I0121 18:50:00.191025 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-z79sf\"" Jan 21 18:50:00 crc kubenswrapper[5099]: I0121 18:50:00.191221 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 18:50:00 crc kubenswrapper[5099]: I0121 18:50:00.293696 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlqg7\" (UniqueName: \"kubernetes.io/projected/80f632bf-01d7-42ef-9a85-dbeecbfd684a-kube-api-access-nlqg7\") pod \"auto-csr-approver-29483690-zm56j\" (UID: \"80f632bf-01d7-42ef-9a85-dbeecbfd684a\") " pod="openshift-infra/auto-csr-approver-29483690-zm56j" Jan 21 18:50:00 crc kubenswrapper[5099]: I0121 18:50:00.396313 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nlqg7\" (UniqueName: \"kubernetes.io/projected/80f632bf-01d7-42ef-9a85-dbeecbfd684a-kube-api-access-nlqg7\") pod \"auto-csr-approver-29483690-zm56j\" (UID: \"80f632bf-01d7-42ef-9a85-dbeecbfd684a\") " pod="openshift-infra/auto-csr-approver-29483690-zm56j" Jan 21 18:50:00 crc kubenswrapper[5099]: I0121 18:50:00.431324 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlqg7\" (UniqueName: \"kubernetes.io/projected/80f632bf-01d7-42ef-9a85-dbeecbfd684a-kube-api-access-nlqg7\") pod \"auto-csr-approver-29483690-zm56j\" (UID: \"80f632bf-01d7-42ef-9a85-dbeecbfd684a\") " pod="openshift-infra/auto-csr-approver-29483690-zm56j" Jan 21 18:50:00 crc kubenswrapper[5099]: I0121 18:50:00.517865 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483690-zm56j" Jan 21 18:50:00 crc kubenswrapper[5099]: I0121 18:50:00.763289 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483690-zm56j"] Jan 21 18:50:01 crc kubenswrapper[5099]: I0121 18:50:01.152105 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483690-zm56j" event={"ID":"80f632bf-01d7-42ef-9a85-dbeecbfd684a","Type":"ContainerStarted","Data":"96ce2edd6ea970bd318be4c3bb14559d004b3fd10fd7afa946858ff10aa99b28"} Jan 21 18:50:03 crc kubenswrapper[5099]: I0121 18:50:03.175197 5099 generic.go:358] "Generic (PLEG): container finished" podID="80f632bf-01d7-42ef-9a85-dbeecbfd684a" containerID="01353e8f923a772b98d92254a711486a802f9349cb0bcfe0c8f31a9f06133b7a" exitCode=0 Jan 21 18:50:03 crc kubenswrapper[5099]: I0121 18:50:03.175319 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483690-zm56j" event={"ID":"80f632bf-01d7-42ef-9a85-dbeecbfd684a","Type":"ContainerDied","Data":"01353e8f923a772b98d92254a711486a802f9349cb0bcfe0c8f31a9f06133b7a"} Jan 21 18:50:04 crc kubenswrapper[5099]: I0121 18:50:04.457844 5099 util.go:48] "No ready sandbox for pod can be found. 
Jan 21 18:50:04 crc kubenswrapper[5099]: I0121 18:50:04.573115 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlqg7\" (UniqueName: \"kubernetes.io/projected/80f632bf-01d7-42ef-9a85-dbeecbfd684a-kube-api-access-nlqg7\") pod \"80f632bf-01d7-42ef-9a85-dbeecbfd684a\" (UID: \"80f632bf-01d7-42ef-9a85-dbeecbfd684a\") "
Jan 21 18:50:04 crc kubenswrapper[5099]: I0121 18:50:04.598913 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80f632bf-01d7-42ef-9a85-dbeecbfd684a-kube-api-access-nlqg7" (OuterVolumeSpecName: "kube-api-access-nlqg7") pod "80f632bf-01d7-42ef-9a85-dbeecbfd684a" (UID: "80f632bf-01d7-42ef-9a85-dbeecbfd684a"). InnerVolumeSpecName "kube-api-access-nlqg7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 18:50:04 crc kubenswrapper[5099]: I0121 18:50:04.675514 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nlqg7\" (UniqueName: \"kubernetes.io/projected/80f632bf-01d7-42ef-9a85-dbeecbfd684a-kube-api-access-nlqg7\") on node \"crc\" DevicePath \"\""
Jan 21 18:50:05 crc kubenswrapper[5099]: I0121 18:50:05.199210 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483690-zm56j" event={"ID":"80f632bf-01d7-42ef-9a85-dbeecbfd684a","Type":"ContainerDied","Data":"96ce2edd6ea970bd318be4c3bb14559d004b3fd10fd7afa946858ff10aa99b28"}
Jan 21 18:50:05 crc kubenswrapper[5099]: I0121 18:50:05.199294 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96ce2edd6ea970bd318be4c3bb14559d004b3fd10fd7afa946858ff10aa99b28"
Jan 21 18:50:05 crc kubenswrapper[5099]: I0121 18:50:05.199954 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483690-zm56j"
Jan 21 18:50:05 crc kubenswrapper[5099]: I0121 18:50:05.530085 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483684-xbtgm"]
Jan 21 18:50:05 crc kubenswrapper[5099]: I0121 18:50:05.536320 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483684-xbtgm"]
Jan 21 18:50:05 crc kubenswrapper[5099]: I0121 18:50:05.924261 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c5a191e-9a0f-46b8-ac2e-c1804f5ee270" path="/var/lib/kubelet/pods/0c5a191e-9a0f-46b8-ac2e-c1804f5ee270/volumes"
Jan 21 18:50:18 crc kubenswrapper[5099]: I0121 18:50:18.679540 5099 scope.go:117] "RemoveContainer" containerID="3a5b1434bcf08e3b9c5a22f7d965d44e5f741a500f3d326eab1b666fb4679d7e"
Jan 21 18:51:52 crc kubenswrapper[5099]: I0121 18:51:52.065372 5099 patch_prober.go:28] interesting pod/machine-config-daemon-hsl47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 18:51:52 crc kubenswrapper[5099]: I0121 18:51:52.066095 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 18:52:00 crc kubenswrapper[5099]: I0121 18:52:00.151688 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483692-6j7vw"]
Jan 21 18:52:00 crc kubenswrapper[5099]: I0121 18:52:00.158323 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="80f632bf-01d7-42ef-9a85-dbeecbfd684a" containerName="oc"
Jan 21 18:52:00 crc kubenswrapper[5099]: I0121 18:52:00.158350 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="80f632bf-01d7-42ef-9a85-dbeecbfd684a" containerName="oc"
Jan 21 18:52:00 crc kubenswrapper[5099]: I0121 18:52:00.158524 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="80f632bf-01d7-42ef-9a85-dbeecbfd684a" containerName="oc"
Jan 21 18:52:00 crc kubenswrapper[5099]: I0121 18:52:00.163676 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483692-6j7vw"]
Jan 21 18:52:00 crc kubenswrapper[5099]: I0121 18:52:00.163905 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483692-6j7vw"
Jan 21 18:52:00 crc kubenswrapper[5099]: I0121 18:52:00.204574 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-z79sf\""
Jan 21 18:52:00 crc kubenswrapper[5099]: I0121 18:52:00.205173 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 21 18:52:00 crc kubenswrapper[5099]: I0121 18:52:00.206966 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 21 18:52:00 crc kubenswrapper[5099]: I0121 18:52:00.308295 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmhjb\" (UniqueName: \"kubernetes.io/projected/a9efd42e-0a46-48f5-bab3-a297dc87775e-kube-api-access-hmhjb\") pod \"auto-csr-approver-29483692-6j7vw\" (UID: \"a9efd42e-0a46-48f5-bab3-a297dc87775e\") " pod="openshift-infra/auto-csr-approver-29483692-6j7vw"
Jan 21 18:52:00 crc kubenswrapper[5099]: I0121 18:52:00.410532 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hmhjb\" (UniqueName: \"kubernetes.io/projected/a9efd42e-0a46-48f5-bab3-a297dc87775e-kube-api-access-hmhjb\") pod \"auto-csr-approver-29483692-6j7vw\" (UID: \"a9efd42e-0a46-48f5-bab3-a297dc87775e\") " pod="openshift-infra/auto-csr-approver-29483692-6j7vw"
Jan 21 18:52:00 crc kubenswrapper[5099]: I0121 18:52:00.432925 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmhjb\" (UniqueName: \"kubernetes.io/projected/a9efd42e-0a46-48f5-bab3-a297dc87775e-kube-api-access-hmhjb\") pod \"auto-csr-approver-29483692-6j7vw\" (UID: \"a9efd42e-0a46-48f5-bab3-a297dc87775e\") " pod="openshift-infra/auto-csr-approver-29483692-6j7vw"
Jan 21 18:52:00 crc kubenswrapper[5099]: I0121 18:52:00.589379 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483692-6j7vw"
Jan 21 18:52:00 crc kubenswrapper[5099]: I0121 18:52:00.899852 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483692-6j7vw"]
Jan 21 18:52:01 crc kubenswrapper[5099]: I0121 18:52:01.311704 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483692-6j7vw" event={"ID":"a9efd42e-0a46-48f5-bab3-a297dc87775e","Type":"ContainerStarted","Data":"9ca6bb586a490da900a1130e5e86b6bfdcc3b5057458182ae3d5df86c1b1ed78"}
Jan 21 18:52:02 crc kubenswrapper[5099]: I0121 18:52:02.321493 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483692-6j7vw" event={"ID":"a9efd42e-0a46-48f5-bab3-a297dc87775e","Type":"ContainerStarted","Data":"46e6c69e7fd8401f9ae09e1728b9910685b5099961a82801aeee29443174c59f"}
Jan 21 18:52:02 crc kubenswrapper[5099]: I0121 18:52:02.356483 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29483692-6j7vw" podStartSLOduration=1.48397609 podStartE2EDuration="2.356446108s" podCreationTimestamp="2026-01-21 18:52:00 +0000 UTC" firstStartedPulling="2026-01-21 18:52:00.904595083 +0000 UTC m=+2278.318557554" lastFinishedPulling="2026-01-21 18:52:01.777065111 +0000 UTC m=+2279.191027572" observedRunningTime="2026-01-21 18:52:02.352883002 +0000 UTC m=+2279.766845473" watchObservedRunningTime="2026-01-21 18:52:02.356446108 +0000 UTC m=+2279.770408579"
Jan 21 18:52:03 crc kubenswrapper[5099]: I0121 18:52:03.335115 5099 generic.go:358] "Generic (PLEG): container finished" podID="a9efd42e-0a46-48f5-bab3-a297dc87775e" containerID="46e6c69e7fd8401f9ae09e1728b9910685b5099961a82801aeee29443174c59f" exitCode=0
Jan 21 18:52:03 crc kubenswrapper[5099]: I0121 18:52:03.335209 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483692-6j7vw" event={"ID":"a9efd42e-0a46-48f5-bab3-a297dc87775e","Type":"ContainerDied","Data":"46e6c69e7fd8401f9ae09e1728b9910685b5099961a82801aeee29443174c59f"}
Jan 21 18:52:04 crc kubenswrapper[5099]: I0121 18:52:04.669828 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483692-6j7vw"
Jan 21 18:52:04 crc kubenswrapper[5099]: I0121 18:52:04.797698 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmhjb\" (UniqueName: \"kubernetes.io/projected/a9efd42e-0a46-48f5-bab3-a297dc87775e-kube-api-access-hmhjb\") pod \"a9efd42e-0a46-48f5-bab3-a297dc87775e\" (UID: \"a9efd42e-0a46-48f5-bab3-a297dc87775e\") "
Jan 21 18:52:04 crc kubenswrapper[5099]: I0121 18:52:04.807125 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9efd42e-0a46-48f5-bab3-a297dc87775e-kube-api-access-hmhjb" (OuterVolumeSpecName: "kube-api-access-hmhjb") pod "a9efd42e-0a46-48f5-bab3-a297dc87775e" (UID: "a9efd42e-0a46-48f5-bab3-a297dc87775e"). InnerVolumeSpecName "kube-api-access-hmhjb". PluginName "kubernetes.io/projected", VolumeGIDValue ""
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:52:04 crc kubenswrapper[5099]: I0121 18:52:04.900599 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hmhjb\" (UniqueName: \"kubernetes.io/projected/a9efd42e-0a46-48f5-bab3-a297dc87775e-kube-api-access-hmhjb\") on node \"crc\" DevicePath \"\"" Jan 21 18:52:05 crc kubenswrapper[5099]: I0121 18:52:05.380641 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483692-6j7vw" event={"ID":"a9efd42e-0a46-48f5-bab3-a297dc87775e","Type":"ContainerDied","Data":"9ca6bb586a490da900a1130e5e86b6bfdcc3b5057458182ae3d5df86c1b1ed78"} Jan 21 18:52:05 crc kubenswrapper[5099]: I0121 18:52:05.381572 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ca6bb586a490da900a1130e5e86b6bfdcc3b5057458182ae3d5df86c1b1ed78" Jan 21 18:52:05 crc kubenswrapper[5099]: I0121 18:52:05.381855 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483692-6j7vw" Jan 21 18:52:05 crc kubenswrapper[5099]: I0121 18:52:05.434525 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483686-l28rf"] Jan 21 18:52:05 crc kubenswrapper[5099]: I0121 18:52:05.444353 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483686-l28rf"] Jan 21 18:52:05 crc kubenswrapper[5099]: I0121 18:52:05.930359 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bec9b3a-6ce3-48d0-aae6-9df255ae482e" path="/var/lib/kubelet/pods/1bec9b3a-6ce3-48d0-aae6-9df255ae482e/volumes" Jan 21 18:52:18 crc kubenswrapper[5099]: I0121 18:52:18.888283 5099 scope.go:117] "RemoveContainer" containerID="45cea5afcfb05cd11aef972c1941f3c3a4680dad454a89f06237729f12d07885" Jan 21 18:52:22 crc kubenswrapper[5099]: I0121 18:52:22.065076 5099 patch_prober.go:28] interesting pod/machine-config-daemon-hsl47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 18:52:22 crc kubenswrapper[5099]: I0121 18:52:22.065601 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 18:52:52 crc kubenswrapper[5099]: I0121 18:52:52.064647 5099 patch_prober.go:28] interesting pod/machine-config-daemon-hsl47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 18:52:52 crc kubenswrapper[5099]: I0121 18:52:52.065705 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 18:52:52 crc kubenswrapper[5099]: I0121 18:52:52.065855 5099 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-hsl47" Jan 21 18:52:52 crc kubenswrapper[5099]: I0121 18:52:52.066844 5099 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e5ecb8c8febc910222a2599d2113d1f6c3c8b860a16ca2711496a4459929451f"} pod="openshift-machine-config-operator/machine-config-daemon-hsl47" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 18:52:52 crc kubenswrapper[5099]: I0121 18:52:52.066939 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" containerID="cri-o://e5ecb8c8febc910222a2599d2113d1f6c3c8b860a16ca2711496a4459929451f" gracePeriod=600 Jan 21 18:52:52 crc kubenswrapper[5099]: E0121 18:52:52.212596 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 18:52:52 crc kubenswrapper[5099]: I0121 18:52:52.819438 5099 generic.go:358] "Generic (PLEG): container finished" podID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerID="e5ecb8c8febc910222a2599d2113d1f6c3c8b860a16ca2711496a4459929451f" exitCode=0 Jan 21 18:52:52 crc kubenswrapper[5099]: I0121 18:52:52.819505 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" event={"ID":"b19b831f-eaf0-4c77-859b-84eb9a5f233c","Type":"ContainerDied","Data":"e5ecb8c8febc910222a2599d2113d1f6c3c8b860a16ca2711496a4459929451f"} Jan 21 18:52:52 crc kubenswrapper[5099]: I0121 18:52:52.819566 5099 scope.go:117] "RemoveContainer" containerID="389f7e89eb63b547e19ab7ac39d47c2a3189e7b6cf539b6005f3fc375000fdb9" Jan 21 18:52:52 crc kubenswrapper[5099]: I0121 18:52:52.820451 5099 scope.go:117] "RemoveContainer" containerID="e5ecb8c8febc910222a2599d2113d1f6c3c8b860a16ca2711496a4459929451f" Jan 21 18:52:52 crc kubenswrapper[5099]: E0121 18:52:52.821113 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 18:53:03 crc kubenswrapper[5099]: I0121 18:53:03.923313 5099 scope.go:117] "RemoveContainer" containerID="e5ecb8c8febc910222a2599d2113d1f6c3c8b860a16ca2711496a4459929451f" Jan 21 18:53:03 crc kubenswrapper[5099]: E0121 18:53:03.924117 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 18:53:18 crc kubenswrapper[5099]: I0121 
18:53:18.914166 5099 scope.go:117] "RemoveContainer" containerID="e5ecb8c8febc910222a2599d2113d1f6c3c8b860a16ca2711496a4459929451f" Jan 21 18:53:18 crc kubenswrapper[5099]: E0121 18:53:18.915290 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 18:53:33 crc kubenswrapper[5099]: I0121 18:53:33.922032 5099 scope.go:117] "RemoveContainer" containerID="e5ecb8c8febc910222a2599d2113d1f6c3c8b860a16ca2711496a4459929451f" Jan 21 18:53:33 crc kubenswrapper[5099]: E0121 18:53:33.923349 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 18:53:47 crc kubenswrapper[5099]: I0121 18:53:47.914554 5099 scope.go:117] "RemoveContainer" containerID="e5ecb8c8febc910222a2599d2113d1f6c3c8b860a16ca2711496a4459929451f" Jan 21 18:53:47 crc kubenswrapper[5099]: E0121 18:53:47.915292 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 18:54:00 crc kubenswrapper[5099]: I0121 18:54:00.137691 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483694-5bjtk"] Jan 21 18:54:00 crc kubenswrapper[5099]: I0121 18:54:00.139507 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a9efd42e-0a46-48f5-bab3-a297dc87775e" containerName="oc" Jan 21 18:54:00 crc kubenswrapper[5099]: I0121 18:54:00.139530 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9efd42e-0a46-48f5-bab3-a297dc87775e" containerName="oc" Jan 21 18:54:00 crc kubenswrapper[5099]: I0121 18:54:00.139675 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="a9efd42e-0a46-48f5-bab3-a297dc87775e" containerName="oc" Jan 21 18:54:00 crc kubenswrapper[5099]: I0121 18:54:00.145215 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483694-5bjtk" Jan 21 18:54:00 crc kubenswrapper[5099]: I0121 18:54:00.147786 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-z79sf\"" Jan 21 18:54:00 crc kubenswrapper[5099]: I0121 18:54:00.149755 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 18:54:00 crc kubenswrapper[5099]: I0121 18:54:00.150451 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 18:54:00 crc kubenswrapper[5099]: I0121 18:54:00.164522 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483694-5bjtk"] Jan 21 18:54:00 crc kubenswrapper[5099]: I0121 18:54:00.331629 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrtsz\" (UniqueName: \"kubernetes.io/projected/142fcda0-4913-4ba4-8205-ed544501bdc5-kube-api-access-vrtsz\") pod \"auto-csr-approver-29483694-5bjtk\" (UID: \"142fcda0-4913-4ba4-8205-ed544501bdc5\") " pod="openshift-infra/auto-csr-approver-29483694-5bjtk" Jan 21 18:54:00 crc kubenswrapper[5099]: I0121 18:54:00.434162 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vrtsz\" (UniqueName: \"kubernetes.io/projected/142fcda0-4913-4ba4-8205-ed544501bdc5-kube-api-access-vrtsz\") pod \"auto-csr-approver-29483694-5bjtk\" (UID: \"142fcda0-4913-4ba4-8205-ed544501bdc5\") " pod="openshift-infra/auto-csr-approver-29483694-5bjtk" Jan 21 18:54:00 crc kubenswrapper[5099]: I0121 18:54:00.469423 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrtsz\" (UniqueName: \"kubernetes.io/projected/142fcda0-4913-4ba4-8205-ed544501bdc5-kube-api-access-vrtsz\") pod \"auto-csr-approver-29483694-5bjtk\" (UID: \"142fcda0-4913-4ba4-8205-ed544501bdc5\") " pod="openshift-infra/auto-csr-approver-29483694-5bjtk" Jan 21 18:54:00 crc kubenswrapper[5099]: I0121 18:54:00.769452 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483694-5bjtk" Jan 21 18:54:01 crc kubenswrapper[5099]: I0121 18:54:01.107012 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483694-5bjtk"] Jan 21 18:54:01 crc kubenswrapper[5099]: I0121 18:54:01.112815 5099 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 18:54:01 crc kubenswrapper[5099]: I0121 18:54:01.522491 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483694-5bjtk" event={"ID":"142fcda0-4913-4ba4-8205-ed544501bdc5","Type":"ContainerStarted","Data":"ea499e5c9e8f6920956831893a2d6192b186212a4ab6578494a2d9fea21c821d"} Jan 21 18:54:01 crc kubenswrapper[5099]: I0121 18:54:01.914511 5099 scope.go:117] "RemoveContainer" containerID="e5ecb8c8febc910222a2599d2113d1f6c3c8b860a16ca2711496a4459929451f" Jan 21 18:54:01 crc kubenswrapper[5099]: E0121 18:54:01.915864 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 18:54:02 crc kubenswrapper[5099]: I0121 18:54:02.534547 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483694-5bjtk" event={"ID":"142fcda0-4913-4ba4-8205-ed544501bdc5","Type":"ContainerStarted","Data":"edbdca7359202df10dbfb1a7035d4ee71d12483c1f95545f023e290fbc5d866a"} Jan 21 18:54:02 crc kubenswrapper[5099]: I0121 18:54:02.554241 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29483694-5bjtk" podStartSLOduration=1.597599046 podStartE2EDuration="2.554218568s" podCreationTimestamp="2026-01-21 18:54:00 +0000 UTC" firstStartedPulling="2026-01-21 18:54:01.11303102 +0000 UTC m=+2398.526993481" lastFinishedPulling="2026-01-21 18:54:02.069650542 +0000 UTC m=+2399.483613003" observedRunningTime="2026-01-21 18:54:02.549295519 +0000 UTC m=+2399.963258000" watchObservedRunningTime="2026-01-21 18:54:02.554218568 +0000 UTC m=+2399.968181029" Jan 21 18:54:03 crc kubenswrapper[5099]: I0121 18:54:03.552607 5099 generic.go:358] "Generic (PLEG): container finished" podID="142fcda0-4913-4ba4-8205-ed544501bdc5" containerID="edbdca7359202df10dbfb1a7035d4ee71d12483c1f95545f023e290fbc5d866a" exitCode=0 Jan 21 18:54:03 crc kubenswrapper[5099]: I0121 18:54:03.552709 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483694-5bjtk" event={"ID":"142fcda0-4913-4ba4-8205-ed544501bdc5","Type":"ContainerDied","Data":"edbdca7359202df10dbfb1a7035d4ee71d12483c1f95545f023e290fbc5d866a"} Jan 21 18:54:04 crc kubenswrapper[5099]: I0121 18:54:04.831613 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483694-5bjtk" Jan 21 18:54:04 crc kubenswrapper[5099]: I0121 18:54:04.909448 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrtsz\" (UniqueName: \"kubernetes.io/projected/142fcda0-4913-4ba4-8205-ed544501bdc5-kube-api-access-vrtsz\") pod \"142fcda0-4913-4ba4-8205-ed544501bdc5\" (UID: \"142fcda0-4913-4ba4-8205-ed544501bdc5\") " Jan 21 18:54:04 crc kubenswrapper[5099]: I0121 18:54:04.917536 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/142fcda0-4913-4ba4-8205-ed544501bdc5-kube-api-access-vrtsz" (OuterVolumeSpecName: "kube-api-access-vrtsz") pod "142fcda0-4913-4ba4-8205-ed544501bdc5" (UID: "142fcda0-4913-4ba4-8205-ed544501bdc5"). InnerVolumeSpecName "kube-api-access-vrtsz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:54:04 crc kubenswrapper[5099]: I0121 18:54:04.993897 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6pvpm_d9b34413-4767-4d59-b13b-8f882453977a/kube-multus/0.log" Jan 21 18:54:04 crc kubenswrapper[5099]: I0121 18:54:04.993916 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6pvpm_d9b34413-4767-4d59-b13b-8f882453977a/kube-multus/0.log" Jan 21 18:54:04 crc kubenswrapper[5099]: I0121 18:54:04.999909 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 18:54:05 crc kubenswrapper[5099]: I0121 18:54:05.000415 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 18:54:05 crc kubenswrapper[5099]: I0121 18:54:05.011364 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vrtsz\" (UniqueName: \"kubernetes.io/projected/142fcda0-4913-4ba4-8205-ed544501bdc5-kube-api-access-vrtsz\") on node \"crc\" DevicePath \"\"" Jan 21 18:54:05 crc kubenswrapper[5099]: I0121 18:54:05.573668 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483694-5bjtk" Jan 21 18:54:05 crc kubenswrapper[5099]: I0121 18:54:05.573798 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483694-5bjtk" event={"ID":"142fcda0-4913-4ba4-8205-ed544501bdc5","Type":"ContainerDied","Data":"ea499e5c9e8f6920956831893a2d6192b186212a4ab6578494a2d9fea21c821d"} Jan 21 18:54:05 crc kubenswrapper[5099]: I0121 18:54:05.574331 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea499e5c9e8f6920956831893a2d6192b186212a4ab6578494a2d9fea21c821d" Jan 21 18:54:05 crc kubenswrapper[5099]: I0121 18:54:05.627754 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483688-86z6l"] Jan 21 18:54:05 crc kubenswrapper[5099]: I0121 18:54:05.637422 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483688-86z6l"] Jan 21 18:54:05 crc kubenswrapper[5099]: I0121 18:54:05.922782 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c66abe68-372d-4a35-b4ad-44b47aec51e9" path="/var/lib/kubelet/pods/c66abe68-372d-4a35-b4ad-44b47aec51e9/volumes" Jan 21 18:54:16 crc kubenswrapper[5099]: I0121 18:54:16.914949 5099 scope.go:117] "RemoveContainer" containerID="e5ecb8c8febc910222a2599d2113d1f6c3c8b860a16ca2711496a4459929451f" Jan 21 18:54:16 crc kubenswrapper[5099]: E0121 18:54:16.916242 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 18:54:19 crc kubenswrapper[5099]: I0121 18:54:19.043438 5099 scope.go:117] "RemoveContainer" containerID="ff25e23c46f1ceff91e0f82de4dfa0a61dd0cb403411c91d1dbc9a6804b1e381" Jan 21 18:54:31 crc kubenswrapper[5099]: I0121 18:54:31.914446 5099 scope.go:117] "RemoveContainer" containerID="e5ecb8c8febc910222a2599d2113d1f6c3c8b860a16ca2711496a4459929451f" Jan 21 18:54:31 crc kubenswrapper[5099]: E0121 18:54:31.916145 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 18:54:46 crc kubenswrapper[5099]: I0121 18:54:46.914233 5099 scope.go:117] "RemoveContainer" containerID="e5ecb8c8febc910222a2599d2113d1f6c3c8b860a16ca2711496a4459929451f" Jan 21 18:54:46 crc kubenswrapper[5099]: E0121 18:54:46.915407 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 18:54:57 crc kubenswrapper[5099]: I0121 18:54:57.914403 5099 scope.go:117] "RemoveContainer" 
containerID="e5ecb8c8febc910222a2599d2113d1f6c3c8b860a16ca2711496a4459929451f" Jan 21 18:54:57 crc kubenswrapper[5099]: E0121 18:54:57.915273 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 18:55:11 crc kubenswrapper[5099]: I0121 18:55:11.917547 5099 scope.go:117] "RemoveContainer" containerID="e5ecb8c8febc910222a2599d2113d1f6c3c8b860a16ca2711496a4459929451f" Jan 21 18:55:11 crc kubenswrapper[5099]: E0121 18:55:11.919126 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 18:55:25 crc kubenswrapper[5099]: I0121 18:55:25.915314 5099 scope.go:117] "RemoveContainer" containerID="e5ecb8c8febc910222a2599d2113d1f6c3c8b860a16ca2711496a4459929451f" Jan 21 18:55:25 crc kubenswrapper[5099]: E0121 18:55:25.916262 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 18:55:40 crc kubenswrapper[5099]: I0121 18:55:40.932474 5099 scope.go:117] "RemoveContainer" containerID="e5ecb8c8febc910222a2599d2113d1f6c3c8b860a16ca2711496a4459929451f" Jan 21 18:55:40 crc kubenswrapper[5099]: E0121 18:55:40.933524 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 18:55:55 crc kubenswrapper[5099]: I0121 18:55:55.913974 5099 scope.go:117] "RemoveContainer" containerID="e5ecb8c8febc910222a2599d2113d1f6c3c8b860a16ca2711496a4459929451f" Jan 21 18:55:55 crc kubenswrapper[5099]: E0121 18:55:55.915227 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 18:56:00 crc kubenswrapper[5099]: I0121 18:56:00.149415 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483696-4m4xz"] Jan 21 18:56:00 crc kubenswrapper[5099]: I0121 18:56:00.151067 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: 
removing container" podUID="142fcda0-4913-4ba4-8205-ed544501bdc5" containerName="oc" Jan 21 18:56:00 crc kubenswrapper[5099]: I0121 18:56:00.151098 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="142fcda0-4913-4ba4-8205-ed544501bdc5" containerName="oc" Jan 21 18:56:00 crc kubenswrapper[5099]: I0121 18:56:00.151295 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="142fcda0-4913-4ba4-8205-ed544501bdc5" containerName="oc" Jan 21 18:56:00 crc kubenswrapper[5099]: I0121 18:56:00.156212 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483696-4m4xz"] Jan 21 18:56:00 crc kubenswrapper[5099]: I0121 18:56:00.156233 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483696-4m4xz" Jan 21 18:56:00 crc kubenswrapper[5099]: I0121 18:56:00.159795 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 18:56:00 crc kubenswrapper[5099]: I0121 18:56:00.160063 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 18:56:00 crc kubenswrapper[5099]: I0121 18:56:00.160187 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-z79sf\"" Jan 21 18:56:00 crc kubenswrapper[5099]: I0121 18:56:00.306803 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh28z\" (UniqueName: \"kubernetes.io/projected/58bf5cc4-83af-4c17-a2fa-b8de56012d23-kube-api-access-vh28z\") pod \"auto-csr-approver-29483696-4m4xz\" (UID: \"58bf5cc4-83af-4c17-a2fa-b8de56012d23\") " pod="openshift-infra/auto-csr-approver-29483696-4m4xz" Jan 21 18:56:00 crc kubenswrapper[5099]: I0121 18:56:00.409234 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vh28z\" (UniqueName: \"kubernetes.io/projected/58bf5cc4-83af-4c17-a2fa-b8de56012d23-kube-api-access-vh28z\") pod \"auto-csr-approver-29483696-4m4xz\" (UID: \"58bf5cc4-83af-4c17-a2fa-b8de56012d23\") " pod="openshift-infra/auto-csr-approver-29483696-4m4xz" Jan 21 18:56:00 crc kubenswrapper[5099]: I0121 18:56:00.446615 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh28z\" (UniqueName: \"kubernetes.io/projected/58bf5cc4-83af-4c17-a2fa-b8de56012d23-kube-api-access-vh28z\") pod \"auto-csr-approver-29483696-4m4xz\" (UID: \"58bf5cc4-83af-4c17-a2fa-b8de56012d23\") " pod="openshift-infra/auto-csr-approver-29483696-4m4xz" Jan 21 18:56:00 crc kubenswrapper[5099]: I0121 18:56:00.489721 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483696-4m4xz" Jan 21 18:56:00 crc kubenswrapper[5099]: I0121 18:56:00.744017 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483696-4m4xz"] Jan 21 18:56:01 crc kubenswrapper[5099]: I0121 18:56:01.732616 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483696-4m4xz" event={"ID":"58bf5cc4-83af-4c17-a2fa-b8de56012d23","Type":"ContainerStarted","Data":"068c46831c1259ce840f54b597ec98c25d0059890ab327a8a6cc53e162f27e7d"} Jan 21 18:56:02 crc kubenswrapper[5099]: I0121 18:56:02.743459 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483696-4m4xz" event={"ID":"58bf5cc4-83af-4c17-a2fa-b8de56012d23","Type":"ContainerStarted","Data":"a2ba036834e456bae512f9ae6a5afda9f27f0ad6ac1916616dc08dbe3eb944fc"} Jan 21 18:56:02 crc kubenswrapper[5099]: I0121 18:56:02.763543 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29483696-4m4xz" podStartSLOduration=1.306770202 podStartE2EDuration="2.763512735s" podCreationTimestamp="2026-01-21 18:56:00 +0000 UTC" firstStartedPulling="2026-01-21 18:56:00.756881287 +0000 UTC m=+2518.170843748" lastFinishedPulling="2026-01-21 18:56:02.21362382 +0000 UTC m=+2519.627586281" observedRunningTime="2026-01-21 18:56:02.75790876 +0000 UTC m=+2520.171871231" watchObservedRunningTime="2026-01-21 18:56:02.763512735 +0000 UTC m=+2520.177475196" Jan 21 18:56:03 crc kubenswrapper[5099]: I0121 18:56:03.756445 5099 generic.go:358] "Generic (PLEG): container finished" podID="58bf5cc4-83af-4c17-a2fa-b8de56012d23" containerID="a2ba036834e456bae512f9ae6a5afda9f27f0ad6ac1916616dc08dbe3eb944fc" exitCode=0 Jan 21 18:56:03 crc kubenswrapper[5099]: I0121 18:56:03.756591 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483696-4m4xz" event={"ID":"58bf5cc4-83af-4c17-a2fa-b8de56012d23","Type":"ContainerDied","Data":"a2ba036834e456bae512f9ae6a5afda9f27f0ad6ac1916616dc08dbe3eb944fc"} Jan 21 18:56:05 crc kubenswrapper[5099]: I0121 18:56:05.060160 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483696-4m4xz" Jan 21 18:56:05 crc kubenswrapper[5099]: I0121 18:56:05.190866 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vh28z\" (UniqueName: \"kubernetes.io/projected/58bf5cc4-83af-4c17-a2fa-b8de56012d23-kube-api-access-vh28z\") pod \"58bf5cc4-83af-4c17-a2fa-b8de56012d23\" (UID: \"58bf5cc4-83af-4c17-a2fa-b8de56012d23\") " Jan 21 18:56:05 crc kubenswrapper[5099]: I0121 18:56:05.196691 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58bf5cc4-83af-4c17-a2fa-b8de56012d23-kube-api-access-vh28z" (OuterVolumeSpecName: "kube-api-access-vh28z") pod "58bf5cc4-83af-4c17-a2fa-b8de56012d23" (UID: "58bf5cc4-83af-4c17-a2fa-b8de56012d23"). InnerVolumeSpecName "kube-api-access-vh28z". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:56:05 crc kubenswrapper[5099]: I0121 18:56:05.293893 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vh28z\" (UniqueName: \"kubernetes.io/projected/58bf5cc4-83af-4c17-a2fa-b8de56012d23-kube-api-access-vh28z\") on node \"crc\" DevicePath \"\"" Jan 21 18:56:05 crc kubenswrapper[5099]: I0121 18:56:05.790709 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483696-4m4xz" Jan 21 18:56:05 crc kubenswrapper[5099]: I0121 18:56:05.790905 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483696-4m4xz" event={"ID":"58bf5cc4-83af-4c17-a2fa-b8de56012d23","Type":"ContainerDied","Data":"068c46831c1259ce840f54b597ec98c25d0059890ab327a8a6cc53e162f27e7d"} Jan 21 18:56:05 crc kubenswrapper[5099]: I0121 18:56:05.791017 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="068c46831c1259ce840f54b597ec98c25d0059890ab327a8a6cc53e162f27e7d" Jan 21 18:56:05 crc kubenswrapper[5099]: I0121 18:56:05.848103 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483690-zm56j"] Jan 21 18:56:05 crc kubenswrapper[5099]: I0121 18:56:05.854796 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483690-zm56j"] Jan 21 18:56:05 crc kubenswrapper[5099]: I0121 18:56:05.923919 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80f632bf-01d7-42ef-9a85-dbeecbfd684a" path="/var/lib/kubelet/pods/80f632bf-01d7-42ef-9a85-dbeecbfd684a/volumes" Jan 21 18:56:07 crc kubenswrapper[5099]: I0121 18:56:07.914785 5099 scope.go:117] "RemoveContainer" containerID="e5ecb8c8febc910222a2599d2113d1f6c3c8b860a16ca2711496a4459929451f" Jan 21 18:56:07 crc kubenswrapper[5099]: E0121 18:56:07.916101 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 18:56:19 crc kubenswrapper[5099]: I0121 18:56:19.220299 5099 scope.go:117] "RemoveContainer" containerID="01353e8f923a772b98d92254a711486a802f9349cb0bcfe0c8f31a9f06133b7a" Jan 21 18:56:22 crc kubenswrapper[5099]: I0121 18:56:22.913951 5099 scope.go:117] "RemoveContainer" containerID="e5ecb8c8febc910222a2599d2113d1f6c3c8b860a16ca2711496a4459929451f" Jan 21 18:56:22 crc kubenswrapper[5099]: E0121 18:56:22.914998 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 18:56:26 crc kubenswrapper[5099]: I0121 18:56:26.863470 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hngxj"] Jan 21 18:56:26 crc kubenswrapper[5099]: I0121 18:56:26.864788 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="58bf5cc4-83af-4c17-a2fa-b8de56012d23" containerName="oc" Jan 21 18:56:26 crc kubenswrapper[5099]: I0121 18:56:26.864805 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="58bf5cc4-83af-4c17-a2fa-b8de56012d23" containerName="oc" Jan 21 18:56:26 crc kubenswrapper[5099]: I0121 18:56:26.864980 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="58bf5cc4-83af-4c17-a2fa-b8de56012d23" containerName="oc" Jan 21 18:56:26 crc kubenswrapper[5099]: I0121 18:56:26.871606 5099 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hngxj" Jan 21 18:56:26 crc kubenswrapper[5099]: I0121 18:56:26.885666 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hngxj"] Jan 21 18:56:26 crc kubenswrapper[5099]: I0121 18:56:26.915978 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/576ca576-b77a-4b3d-87d2-74ab94c5b939-utilities\") pod \"redhat-operators-hngxj\" (UID: \"576ca576-b77a-4b3d-87d2-74ab94c5b939\") " pod="openshift-marketplace/redhat-operators-hngxj" Jan 21 18:56:26 crc kubenswrapper[5099]: I0121 18:56:26.916075 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/576ca576-b77a-4b3d-87d2-74ab94c5b939-catalog-content\") pod \"redhat-operators-hngxj\" (UID: \"576ca576-b77a-4b3d-87d2-74ab94c5b939\") " pod="openshift-marketplace/redhat-operators-hngxj" Jan 21 18:56:26 crc kubenswrapper[5099]: I0121 18:56:26.916112 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5dbh\" (UniqueName: \"kubernetes.io/projected/576ca576-b77a-4b3d-87d2-74ab94c5b939-kube-api-access-x5dbh\") pod \"redhat-operators-hngxj\" (UID: \"576ca576-b77a-4b3d-87d2-74ab94c5b939\") " pod="openshift-marketplace/redhat-operators-hngxj" Jan 21 18:56:27 crc kubenswrapper[5099]: I0121 18:56:27.017237 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/576ca576-b77a-4b3d-87d2-74ab94c5b939-utilities\") pod \"redhat-operators-hngxj\" (UID: \"576ca576-b77a-4b3d-87d2-74ab94c5b939\") " pod="openshift-marketplace/redhat-operators-hngxj" Jan 21 18:56:27 crc kubenswrapper[5099]: I0121 18:56:27.017313 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/576ca576-b77a-4b3d-87d2-74ab94c5b939-catalog-content\") pod \"redhat-operators-hngxj\" (UID: \"576ca576-b77a-4b3d-87d2-74ab94c5b939\") " pod="openshift-marketplace/redhat-operators-hngxj" Jan 21 18:56:27 crc kubenswrapper[5099]: I0121 18:56:27.017346 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x5dbh\" (UniqueName: \"kubernetes.io/projected/576ca576-b77a-4b3d-87d2-74ab94c5b939-kube-api-access-x5dbh\") pod \"redhat-operators-hngxj\" (UID: \"576ca576-b77a-4b3d-87d2-74ab94c5b939\") " pod="openshift-marketplace/redhat-operators-hngxj" Jan 21 18:56:27 crc kubenswrapper[5099]: I0121 18:56:27.017868 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/576ca576-b77a-4b3d-87d2-74ab94c5b939-utilities\") pod \"redhat-operators-hngxj\" (UID: \"576ca576-b77a-4b3d-87d2-74ab94c5b939\") " pod="openshift-marketplace/redhat-operators-hngxj" Jan 21 18:56:27 crc kubenswrapper[5099]: I0121 18:56:27.017938 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/576ca576-b77a-4b3d-87d2-74ab94c5b939-catalog-content\") pod \"redhat-operators-hngxj\" (UID: \"576ca576-b77a-4b3d-87d2-74ab94c5b939\") " pod="openshift-marketplace/redhat-operators-hngxj" Jan 21 18:56:27 crc kubenswrapper[5099]: I0121 18:56:27.044249 5099 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-x5dbh\" (UniqueName: \"kubernetes.io/projected/576ca576-b77a-4b3d-87d2-74ab94c5b939-kube-api-access-x5dbh\") pod \"redhat-operators-hngxj\" (UID: \"576ca576-b77a-4b3d-87d2-74ab94c5b939\") " pod="openshift-marketplace/redhat-operators-hngxj" Jan 21 18:56:27 crc kubenswrapper[5099]: I0121 18:56:27.236445 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hngxj" Jan 21 18:56:27 crc kubenswrapper[5099]: I0121 18:56:27.504217 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hngxj"] Jan 21 18:56:28 crc kubenswrapper[5099]: I0121 18:56:28.034092 5099 generic.go:358] "Generic (PLEG): container finished" podID="576ca576-b77a-4b3d-87d2-74ab94c5b939" containerID="aec6c416e05bd98f478a2f739e2be535f6c9af8f28e9e634332089017ccfb6d8" exitCode=0 Jan 21 18:56:28 crc kubenswrapper[5099]: I0121 18:56:28.034207 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hngxj" event={"ID":"576ca576-b77a-4b3d-87d2-74ab94c5b939","Type":"ContainerDied","Data":"aec6c416e05bd98f478a2f739e2be535f6c9af8f28e9e634332089017ccfb6d8"} Jan 21 18:56:28 crc kubenswrapper[5099]: I0121 18:56:28.034711 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hngxj" event={"ID":"576ca576-b77a-4b3d-87d2-74ab94c5b939","Type":"ContainerStarted","Data":"1fc82b8fb1401fd3c74d3c3ae0b9279dd4816759b8157e9c5f90256e6f0b4e2c"} Jan 21 18:56:30 crc kubenswrapper[5099]: I0121 18:56:30.079475 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hngxj" event={"ID":"576ca576-b77a-4b3d-87d2-74ab94c5b939","Type":"ContainerDied","Data":"917c5b9ceb5a667411316b7907944c3260f2d036212f83f8cb0c0ff475640c41"} Jan 21 18:56:30 crc kubenswrapper[5099]: I0121 18:56:30.080459 5099 generic.go:358] "Generic (PLEG): container finished" podID="576ca576-b77a-4b3d-87d2-74ab94c5b939" containerID="917c5b9ceb5a667411316b7907944c3260f2d036212f83f8cb0c0ff475640c41" exitCode=0 Jan 21 18:56:31 crc kubenswrapper[5099]: I0121 18:56:31.095610 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hngxj" event={"ID":"576ca576-b77a-4b3d-87d2-74ab94c5b939","Type":"ContainerStarted","Data":"0c75d226c65ee32e9948bc97dea3833a4f5d238cf43b103a7695478cbf977636"} Jan 21 18:56:31 crc kubenswrapper[5099]: I0121 18:56:31.127325 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hngxj" podStartSLOduration=4.200406142 podStartE2EDuration="5.127302655s" podCreationTimestamp="2026-01-21 18:56:26 +0000 UTC" firstStartedPulling="2026-01-21 18:56:28.035781946 +0000 UTC m=+2545.449744447" lastFinishedPulling="2026-01-21 18:56:28.962678499 +0000 UTC m=+2546.376640960" observedRunningTime="2026-01-21 18:56:31.125658915 +0000 UTC m=+2548.539621396" watchObservedRunningTime="2026-01-21 18:56:31.127302655 +0000 UTC m=+2548.541265106" Jan 21 18:56:34 crc kubenswrapper[5099]: I0121 18:56:34.914628 5099 scope.go:117] "RemoveContainer" containerID="e5ecb8c8febc910222a2599d2113d1f6c3c8b860a16ca2711496a4459929451f" Jan 21 18:56:34 crc kubenswrapper[5099]: E0121 18:56:34.915631 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 18:56:37 crc kubenswrapper[5099]: I0121 18:56:37.237489 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-hngxj" Jan 21 18:56:37 crc kubenswrapper[5099]: I0121 18:56:37.237601 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hngxj" Jan 21 18:56:37 crc kubenswrapper[5099]: I0121 18:56:37.302699 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hngxj" Jan 21 18:56:38 crc kubenswrapper[5099]: I0121 18:56:38.231582 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hngxj" Jan 21 18:56:40 crc kubenswrapper[5099]: I0121 18:56:40.453626 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hngxj"] Jan 21 18:56:40 crc kubenswrapper[5099]: I0121 18:56:40.454144 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hngxj" podUID="576ca576-b77a-4b3d-87d2-74ab94c5b939" containerName="registry-server" containerID="cri-o://0c75d226c65ee32e9948bc97dea3833a4f5d238cf43b103a7695478cbf977636" gracePeriod=2 Jan 21 18:56:41 crc kubenswrapper[5099]: I0121 18:56:41.975203 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hngxj" Jan 21 18:56:41 crc kubenswrapper[5099]: I0121 18:56:41.983400 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/576ca576-b77a-4b3d-87d2-74ab94c5b939-utilities\") pod \"576ca576-b77a-4b3d-87d2-74ab94c5b939\" (UID: \"576ca576-b77a-4b3d-87d2-74ab94c5b939\") " Jan 21 18:56:41 crc kubenswrapper[5099]: I0121 18:56:41.983537 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/576ca576-b77a-4b3d-87d2-74ab94c5b939-catalog-content\") pod \"576ca576-b77a-4b3d-87d2-74ab94c5b939\" (UID: \"576ca576-b77a-4b3d-87d2-74ab94c5b939\") " Jan 21 18:56:41 crc kubenswrapper[5099]: I0121 18:56:41.983825 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5dbh\" (UniqueName: \"kubernetes.io/projected/576ca576-b77a-4b3d-87d2-74ab94c5b939-kube-api-access-x5dbh\") pod \"576ca576-b77a-4b3d-87d2-74ab94c5b939\" (UID: \"576ca576-b77a-4b3d-87d2-74ab94c5b939\") " Jan 21 18:56:41 crc kubenswrapper[5099]: I0121 18:56:41.985521 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/576ca576-b77a-4b3d-87d2-74ab94c5b939-utilities" (OuterVolumeSpecName: "utilities") pod "576ca576-b77a-4b3d-87d2-74ab94c5b939" (UID: "576ca576-b77a-4b3d-87d2-74ab94c5b939"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:56:41 crc kubenswrapper[5099]: I0121 18:56:41.995597 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/576ca576-b77a-4b3d-87d2-74ab94c5b939-kube-api-access-x5dbh" (OuterVolumeSpecName: "kube-api-access-x5dbh") pod "576ca576-b77a-4b3d-87d2-74ab94c5b939" (UID: "576ca576-b77a-4b3d-87d2-74ab94c5b939"). InnerVolumeSpecName "kube-api-access-x5dbh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:56:42 crc kubenswrapper[5099]: I0121 18:56:42.085448 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-x5dbh\" (UniqueName: \"kubernetes.io/projected/576ca576-b77a-4b3d-87d2-74ab94c5b939-kube-api-access-x5dbh\") on node \"crc\" DevicePath \"\"" Jan 21 18:56:42 crc kubenswrapper[5099]: I0121 18:56:42.085492 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/576ca576-b77a-4b3d-87d2-74ab94c5b939-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 18:56:42 crc kubenswrapper[5099]: I0121 18:56:42.116216 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/576ca576-b77a-4b3d-87d2-74ab94c5b939-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "576ca576-b77a-4b3d-87d2-74ab94c5b939" (UID: "576ca576-b77a-4b3d-87d2-74ab94c5b939"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 18:56:42 crc kubenswrapper[5099]: I0121 18:56:42.187767 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/576ca576-b77a-4b3d-87d2-74ab94c5b939-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 18:56:42 crc kubenswrapper[5099]: I0121 18:56:42.210693 5099 generic.go:358] "Generic (PLEG): container finished" podID="576ca576-b77a-4b3d-87d2-74ab94c5b939" containerID="0c75d226c65ee32e9948bc97dea3833a4f5d238cf43b103a7695478cbf977636" exitCode=0 Jan 21 18:56:42 crc kubenswrapper[5099]: I0121 18:56:42.210780 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hngxj" event={"ID":"576ca576-b77a-4b3d-87d2-74ab94c5b939","Type":"ContainerDied","Data":"0c75d226c65ee32e9948bc97dea3833a4f5d238cf43b103a7695478cbf977636"} Jan 21 18:56:42 crc kubenswrapper[5099]: I0121 18:56:42.210878 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hngxj" event={"ID":"576ca576-b77a-4b3d-87d2-74ab94c5b939","Type":"ContainerDied","Data":"1fc82b8fb1401fd3c74d3c3ae0b9279dd4816759b8157e9c5f90256e6f0b4e2c"} Jan 21 18:56:42 crc kubenswrapper[5099]: I0121 18:56:42.210900 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hngxj" Jan 21 18:56:42 crc kubenswrapper[5099]: I0121 18:56:42.210925 5099 scope.go:117] "RemoveContainer" containerID="0c75d226c65ee32e9948bc97dea3833a4f5d238cf43b103a7695478cbf977636" Jan 21 18:56:42 crc kubenswrapper[5099]: I0121 18:56:42.240508 5099 scope.go:117] "RemoveContainer" containerID="917c5b9ceb5a667411316b7907944c3260f2d036212f83f8cb0c0ff475640c41" Jan 21 18:56:42 crc kubenswrapper[5099]: I0121 18:56:42.265908 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hngxj"] Jan 21 18:56:42 crc kubenswrapper[5099]: I0121 18:56:42.272909 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hngxj"] Jan 21 18:56:42 crc kubenswrapper[5099]: I0121 18:56:42.290453 5099 scope.go:117] "RemoveContainer" containerID="aec6c416e05bd98f478a2f739e2be535f6c9af8f28e9e634332089017ccfb6d8" Jan 21 18:56:42 crc kubenswrapper[5099]: I0121 18:56:42.316709 5099 scope.go:117] "RemoveContainer" containerID="0c75d226c65ee32e9948bc97dea3833a4f5d238cf43b103a7695478cbf977636" Jan 21 18:56:42 crc kubenswrapper[5099]: E0121 18:56:42.317481 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c75d226c65ee32e9948bc97dea3833a4f5d238cf43b103a7695478cbf977636\": container with ID starting with 0c75d226c65ee32e9948bc97dea3833a4f5d238cf43b103a7695478cbf977636 not found: ID does not exist" containerID="0c75d226c65ee32e9948bc97dea3833a4f5d238cf43b103a7695478cbf977636" Jan 21 18:56:42 crc kubenswrapper[5099]: I0121 18:56:42.317535 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c75d226c65ee32e9948bc97dea3833a4f5d238cf43b103a7695478cbf977636"} err="failed to get container status \"0c75d226c65ee32e9948bc97dea3833a4f5d238cf43b103a7695478cbf977636\": rpc error: code = NotFound desc = could not find container \"0c75d226c65ee32e9948bc97dea3833a4f5d238cf43b103a7695478cbf977636\": container with ID starting with 0c75d226c65ee32e9948bc97dea3833a4f5d238cf43b103a7695478cbf977636 not found: ID does not exist" Jan 21 18:56:42 crc kubenswrapper[5099]: I0121 18:56:42.317575 5099 scope.go:117] "RemoveContainer" containerID="917c5b9ceb5a667411316b7907944c3260f2d036212f83f8cb0c0ff475640c41" Jan 21 18:56:42 crc kubenswrapper[5099]: E0121 18:56:42.318189 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"917c5b9ceb5a667411316b7907944c3260f2d036212f83f8cb0c0ff475640c41\": container with ID starting with 917c5b9ceb5a667411316b7907944c3260f2d036212f83f8cb0c0ff475640c41 not found: ID does not exist" containerID="917c5b9ceb5a667411316b7907944c3260f2d036212f83f8cb0c0ff475640c41" Jan 21 18:56:42 crc kubenswrapper[5099]: I0121 18:56:42.318279 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"917c5b9ceb5a667411316b7907944c3260f2d036212f83f8cb0c0ff475640c41"} err="failed to get container status \"917c5b9ceb5a667411316b7907944c3260f2d036212f83f8cb0c0ff475640c41\": rpc error: code = NotFound desc = could not find container \"917c5b9ceb5a667411316b7907944c3260f2d036212f83f8cb0c0ff475640c41\": container with ID starting with 917c5b9ceb5a667411316b7907944c3260f2d036212f83f8cb0c0ff475640c41 not found: ID does not exist" Jan 21 18:56:42 crc kubenswrapper[5099]: I0121 18:56:42.318302 5099 scope.go:117] "RemoveContainer" 
containerID="aec6c416e05bd98f478a2f739e2be535f6c9af8f28e9e634332089017ccfb6d8" Jan 21 18:56:42 crc kubenswrapper[5099]: E0121 18:56:42.318887 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aec6c416e05bd98f478a2f739e2be535f6c9af8f28e9e634332089017ccfb6d8\": container with ID starting with aec6c416e05bd98f478a2f739e2be535f6c9af8f28e9e634332089017ccfb6d8 not found: ID does not exist" containerID="aec6c416e05bd98f478a2f739e2be535f6c9af8f28e9e634332089017ccfb6d8" Jan 21 18:56:42 crc kubenswrapper[5099]: I0121 18:56:42.318929 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aec6c416e05bd98f478a2f739e2be535f6c9af8f28e9e634332089017ccfb6d8"} err="failed to get container status \"aec6c416e05bd98f478a2f739e2be535f6c9af8f28e9e634332089017ccfb6d8\": rpc error: code = NotFound desc = could not find container \"aec6c416e05bd98f478a2f739e2be535f6c9af8f28e9e634332089017ccfb6d8\": container with ID starting with aec6c416e05bd98f478a2f739e2be535f6c9af8f28e9e634332089017ccfb6d8 not found: ID does not exist" Jan 21 18:56:43 crc kubenswrapper[5099]: I0121 18:56:43.923784 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="576ca576-b77a-4b3d-87d2-74ab94c5b939" path="/var/lib/kubelet/pods/576ca576-b77a-4b3d-87d2-74ab94c5b939/volumes" Jan 21 18:56:49 crc kubenswrapper[5099]: I0121 18:56:49.914227 5099 scope.go:117] "RemoveContainer" containerID="e5ecb8c8febc910222a2599d2113d1f6c3c8b860a16ca2711496a4459929451f" Jan 21 18:56:49 crc kubenswrapper[5099]: E0121 18:56:49.915351 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 18:57:04 crc kubenswrapper[5099]: I0121 18:57:04.913877 5099 scope.go:117] "RemoveContainer" containerID="e5ecb8c8febc910222a2599d2113d1f6c3c8b860a16ca2711496a4459929451f" Jan 21 18:57:04 crc kubenswrapper[5099]: E0121 18:57:04.914865 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 18:57:17 crc kubenswrapper[5099]: I0121 18:57:17.914803 5099 scope.go:117] "RemoveContainer" containerID="e5ecb8c8febc910222a2599d2113d1f6c3c8b860a16ca2711496a4459929451f" Jan 21 18:57:17 crc kubenswrapper[5099]: E0121 18:57:17.916493 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 18:57:28 crc kubenswrapper[5099]: I0121 18:57:28.913450 5099 scope.go:117] "RemoveContainer" 
containerID="e5ecb8c8febc910222a2599d2113d1f6c3c8b860a16ca2711496a4459929451f" Jan 21 18:57:28 crc kubenswrapper[5099]: E0121 18:57:28.914371 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 18:57:40 crc kubenswrapper[5099]: I0121 18:57:40.914681 5099 scope.go:117] "RemoveContainer" containerID="e5ecb8c8febc910222a2599d2113d1f6c3c8b860a16ca2711496a4459929451f" Jan 21 18:57:40 crc kubenswrapper[5099]: E0121 18:57:40.916152 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 18:57:52 crc kubenswrapper[5099]: I0121 18:57:52.914628 5099 scope.go:117] "RemoveContainer" containerID="e5ecb8c8febc910222a2599d2113d1f6c3c8b860a16ca2711496a4459929451f" Jan 21 18:57:53 crc kubenswrapper[5099]: I0121 18:57:53.970495 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" event={"ID":"b19b831f-eaf0-4c77-859b-84eb9a5f233c","Type":"ContainerStarted","Data":"19f284b3397f38ecead8f041287c5ab09dae33e60d991a25139de1f67cebf1aa"} Jan 21 18:58:00 crc kubenswrapper[5099]: I0121 18:58:00.155146 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483698-r9sgw"] Jan 21 18:58:00 crc kubenswrapper[5099]: I0121 18:58:00.156943 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="576ca576-b77a-4b3d-87d2-74ab94c5b939" containerName="extract-utilities" Jan 21 18:58:00 crc kubenswrapper[5099]: I0121 18:58:00.156965 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="576ca576-b77a-4b3d-87d2-74ab94c5b939" containerName="extract-utilities" Jan 21 18:58:00 crc kubenswrapper[5099]: I0121 18:58:00.157074 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="576ca576-b77a-4b3d-87d2-74ab94c5b939" containerName="extract-content" Jan 21 18:58:00 crc kubenswrapper[5099]: I0121 18:58:00.157086 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="576ca576-b77a-4b3d-87d2-74ab94c5b939" containerName="extract-content" Jan 21 18:58:00 crc kubenswrapper[5099]: I0121 18:58:00.157106 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="576ca576-b77a-4b3d-87d2-74ab94c5b939" containerName="registry-server" Jan 21 18:58:00 crc kubenswrapper[5099]: I0121 18:58:00.157116 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="576ca576-b77a-4b3d-87d2-74ab94c5b939" containerName="registry-server" Jan 21 18:58:00 crc kubenswrapper[5099]: I0121 18:58:00.157327 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="576ca576-b77a-4b3d-87d2-74ab94c5b939" containerName="registry-server" Jan 21 18:58:00 crc kubenswrapper[5099]: I0121 18:58:00.172130 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483698-r9sgw" Jan 21 18:58:00 crc kubenswrapper[5099]: I0121 18:58:00.209152 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh2nn\" (UniqueName: \"kubernetes.io/projected/dbd324cb-b624-4bd5-bcdc-335e5caeabe6-kube-api-access-fh2nn\") pod \"auto-csr-approver-29483698-r9sgw\" (UID: \"dbd324cb-b624-4bd5-bcdc-335e5caeabe6\") " pod="openshift-infra/auto-csr-approver-29483698-r9sgw" Jan 21 18:58:00 crc kubenswrapper[5099]: I0121 18:58:00.211097 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-z79sf\"" Jan 21 18:58:00 crc kubenswrapper[5099]: I0121 18:58:00.211428 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 18:58:00 crc kubenswrapper[5099]: I0121 18:58:00.211467 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 18:58:00 crc kubenswrapper[5099]: I0121 18:58:00.211461 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483698-r9sgw"] Jan 21 18:58:00 crc kubenswrapper[5099]: I0121 18:58:00.310937 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fh2nn\" (UniqueName: \"kubernetes.io/projected/dbd324cb-b624-4bd5-bcdc-335e5caeabe6-kube-api-access-fh2nn\") pod \"auto-csr-approver-29483698-r9sgw\" (UID: \"dbd324cb-b624-4bd5-bcdc-335e5caeabe6\") " pod="openshift-infra/auto-csr-approver-29483698-r9sgw" Jan 21 18:58:00 crc kubenswrapper[5099]: I0121 18:58:00.332494 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fh2nn\" (UniqueName: \"kubernetes.io/projected/dbd324cb-b624-4bd5-bcdc-335e5caeabe6-kube-api-access-fh2nn\") pod \"auto-csr-approver-29483698-r9sgw\" (UID: \"dbd324cb-b624-4bd5-bcdc-335e5caeabe6\") " pod="openshift-infra/auto-csr-approver-29483698-r9sgw" Jan 21 18:58:00 crc kubenswrapper[5099]: I0121 18:58:00.532424 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483698-r9sgw" Jan 21 18:58:00 crc kubenswrapper[5099]: I0121 18:58:00.819936 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483698-r9sgw"] Jan 21 18:58:01 crc kubenswrapper[5099]: I0121 18:58:01.039496 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483698-r9sgw" event={"ID":"dbd324cb-b624-4bd5-bcdc-335e5caeabe6","Type":"ContainerStarted","Data":"cc6449f4662285d1a9c26a9a6eda06d9385002c9bb08a2f55f04e103f26e57fd"} Jan 21 18:58:03 crc kubenswrapper[5099]: I0121 18:58:03.059712 5099 generic.go:358] "Generic (PLEG): container finished" podID="dbd324cb-b624-4bd5-bcdc-335e5caeabe6" containerID="1be30e73a4a59966cd6b182159aeb183a6f9d21f965a66742f7b0b35d580a8a5" exitCode=0 Jan 21 18:58:03 crc kubenswrapper[5099]: I0121 18:58:03.060292 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483698-r9sgw" event={"ID":"dbd324cb-b624-4bd5-bcdc-335e5caeabe6","Type":"ContainerDied","Data":"1be30e73a4a59966cd6b182159aeb183a6f9d21f965a66742f7b0b35d580a8a5"} Jan 21 18:58:04 crc kubenswrapper[5099]: I0121 18:58:04.352090 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483698-r9sgw" Jan 21 18:58:04 crc kubenswrapper[5099]: I0121 18:58:04.392148 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fh2nn\" (UniqueName: \"kubernetes.io/projected/dbd324cb-b624-4bd5-bcdc-335e5caeabe6-kube-api-access-fh2nn\") pod \"dbd324cb-b624-4bd5-bcdc-335e5caeabe6\" (UID: \"dbd324cb-b624-4bd5-bcdc-335e5caeabe6\") " Jan 21 18:58:04 crc kubenswrapper[5099]: I0121 18:58:04.400060 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbd324cb-b624-4bd5-bcdc-335e5caeabe6-kube-api-access-fh2nn" (OuterVolumeSpecName: "kube-api-access-fh2nn") pod "dbd324cb-b624-4bd5-bcdc-335e5caeabe6" (UID: "dbd324cb-b624-4bd5-bcdc-335e5caeabe6"). InnerVolumeSpecName "kube-api-access-fh2nn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 18:58:04 crc kubenswrapper[5099]: I0121 18:58:04.494207 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fh2nn\" (UniqueName: \"kubernetes.io/projected/dbd324cb-b624-4bd5-bcdc-335e5caeabe6-kube-api-access-fh2nn\") on node \"crc\" DevicePath \"\"" Jan 21 18:58:05 crc kubenswrapper[5099]: I0121 18:58:05.080598 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483698-r9sgw" event={"ID":"dbd324cb-b624-4bd5-bcdc-335e5caeabe6","Type":"ContainerDied","Data":"cc6449f4662285d1a9c26a9a6eda06d9385002c9bb08a2f55f04e103f26e57fd"} Jan 21 18:58:05 crc kubenswrapper[5099]: I0121 18:58:05.080649 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc6449f4662285d1a9c26a9a6eda06d9385002c9bb08a2f55f04e103f26e57fd" Jan 21 18:58:05 crc kubenswrapper[5099]: I0121 18:58:05.080667 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483698-r9sgw" Jan 21 18:58:05 crc kubenswrapper[5099]: I0121 18:58:05.444781 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483692-6j7vw"] Jan 21 18:58:05 crc kubenswrapper[5099]: I0121 18:58:05.456019 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483692-6j7vw"] Jan 21 18:58:05 crc kubenswrapper[5099]: I0121 18:58:05.928473 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9efd42e-0a46-48f5-bab3-a297dc87775e" path="/var/lib/kubelet/pods/a9efd42e-0a46-48f5-bab3-a297dc87775e/volumes" Jan 21 18:58:19 crc kubenswrapper[5099]: I0121 18:58:19.415414 5099 scope.go:117] "RemoveContainer" containerID="46e6c69e7fd8401f9ae09e1728b9910685b5099961a82801aeee29443174c59f" Jan 21 18:59:05 crc kubenswrapper[5099]: I0121 18:59:05.122475 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6pvpm_d9b34413-4767-4d59-b13b-8f882453977a/kube-multus/0.log" Jan 21 18:59:05 crc kubenswrapper[5099]: I0121 18:59:05.126039 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6pvpm_d9b34413-4767-4d59-b13b-8f882453977a/kube-multus/0.log" Jan 21 18:59:05 crc kubenswrapper[5099]: I0121 18:59:05.132660 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 18:59:05 crc kubenswrapper[5099]: I0121 18:59:05.134988 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 19:00:00 crc kubenswrapper[5099]: I0121 19:00:00.142863 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483700-rhmqg"] Jan 21 19:00:00 crc kubenswrapper[5099]: I0121 19:00:00.145517 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dbd324cb-b624-4bd5-bcdc-335e5caeabe6" containerName="oc" Jan 21 19:00:00 crc kubenswrapper[5099]: I0121 19:00:00.145644 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbd324cb-b624-4bd5-bcdc-335e5caeabe6" containerName="oc" Jan 21 19:00:00 crc kubenswrapper[5099]: I0121 19:00:00.145982 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="dbd324cb-b624-4bd5-bcdc-335e5caeabe6" containerName="oc" Jan 21 19:00:00 crc kubenswrapper[5099]: I0121 19:00:00.152551 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483700-rhmqg" Jan 21 19:00:00 crc kubenswrapper[5099]: I0121 19:00:00.164213 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483700-vxfjk"] Jan 21 19:00:00 crc kubenswrapper[5099]: I0121 19:00:00.170369 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 19:00:00 crc kubenswrapper[5099]: I0121 19:00:00.170473 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-z79sf\"" Jan 21 19:00:00 crc kubenswrapper[5099]: I0121 19:00:00.172884 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 19:00:00 crc kubenswrapper[5099]: I0121 19:00:00.173518 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483700-vxfjk" Jan 21 19:00:00 crc kubenswrapper[5099]: I0121 19:00:00.174137 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483700-vxfjk"] Jan 21 19:00:00 crc kubenswrapper[5099]: I0121 19:00:00.176612 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 21 19:00:00 crc kubenswrapper[5099]: I0121 19:00:00.176781 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 21 19:00:00 crc kubenswrapper[5099]: I0121 19:00:00.194220 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483700-rhmqg"] Jan 21 19:00:00 crc kubenswrapper[5099]: I0121 19:00:00.288371 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/918ee396-65c2-4421-959d-f245f87b6269-config-volume\") pod \"collect-profiles-29483700-vxfjk\" (UID: \"918ee396-65c2-4421-959d-f245f87b6269\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483700-vxfjk" Jan 21 19:00:00 crc kubenswrapper[5099]: I0121 19:00:00.288443 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/918ee396-65c2-4421-959d-f245f87b6269-secret-volume\") pod \"collect-profiles-29483700-vxfjk\" (UID: \"918ee396-65c2-4421-959d-f245f87b6269\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483700-vxfjk" Jan 21 19:00:00 crc kubenswrapper[5099]: I0121 19:00:00.288486 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffxcr\" (UniqueName: \"kubernetes.io/projected/9f6c6d06-df0e-475a-8008-8338129bc609-kube-api-access-ffxcr\") pod \"auto-csr-approver-29483700-rhmqg\" (UID: \"9f6c6d06-df0e-475a-8008-8338129bc609\") " pod="openshift-infra/auto-csr-approver-29483700-rhmqg" Jan 21 19:00:00 crc kubenswrapper[5099]: I0121 19:00:00.288633 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9msm7\" (UniqueName: \"kubernetes.io/projected/918ee396-65c2-4421-959d-f245f87b6269-kube-api-access-9msm7\") pod \"collect-profiles-29483700-vxfjk\" (UID: \"918ee396-65c2-4421-959d-f245f87b6269\") 
" pod="openshift-operator-lifecycle-manager/collect-profiles-29483700-vxfjk" Jan 21 19:00:00 crc kubenswrapper[5099]: I0121 19:00:00.390576 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9msm7\" (UniqueName: \"kubernetes.io/projected/918ee396-65c2-4421-959d-f245f87b6269-kube-api-access-9msm7\") pod \"collect-profiles-29483700-vxfjk\" (UID: \"918ee396-65c2-4421-959d-f245f87b6269\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483700-vxfjk" Jan 21 19:00:00 crc kubenswrapper[5099]: I0121 19:00:00.390987 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/918ee396-65c2-4421-959d-f245f87b6269-config-volume\") pod \"collect-profiles-29483700-vxfjk\" (UID: \"918ee396-65c2-4421-959d-f245f87b6269\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483700-vxfjk" Jan 21 19:00:00 crc kubenswrapper[5099]: I0121 19:00:00.391131 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/918ee396-65c2-4421-959d-f245f87b6269-secret-volume\") pod \"collect-profiles-29483700-vxfjk\" (UID: \"918ee396-65c2-4421-959d-f245f87b6269\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483700-vxfjk" Jan 21 19:00:00 crc kubenswrapper[5099]: I0121 19:00:00.391253 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ffxcr\" (UniqueName: \"kubernetes.io/projected/9f6c6d06-df0e-475a-8008-8338129bc609-kube-api-access-ffxcr\") pod \"auto-csr-approver-29483700-rhmqg\" (UID: \"9f6c6d06-df0e-475a-8008-8338129bc609\") " pod="openshift-infra/auto-csr-approver-29483700-rhmqg" Jan 21 19:00:00 crc kubenswrapper[5099]: I0121 19:00:00.392109 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/918ee396-65c2-4421-959d-f245f87b6269-config-volume\") pod \"collect-profiles-29483700-vxfjk\" (UID: \"918ee396-65c2-4421-959d-f245f87b6269\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483700-vxfjk" Jan 21 19:00:00 crc kubenswrapper[5099]: I0121 19:00:00.404552 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/918ee396-65c2-4421-959d-f245f87b6269-secret-volume\") pod \"collect-profiles-29483700-vxfjk\" (UID: \"918ee396-65c2-4421-959d-f245f87b6269\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483700-vxfjk" Jan 21 19:00:00 crc kubenswrapper[5099]: I0121 19:00:00.427223 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9msm7\" (UniqueName: \"kubernetes.io/projected/918ee396-65c2-4421-959d-f245f87b6269-kube-api-access-9msm7\") pod \"collect-profiles-29483700-vxfjk\" (UID: \"918ee396-65c2-4421-959d-f245f87b6269\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483700-vxfjk" Jan 21 19:00:00 crc kubenswrapper[5099]: I0121 19:00:00.429438 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffxcr\" (UniqueName: \"kubernetes.io/projected/9f6c6d06-df0e-475a-8008-8338129bc609-kube-api-access-ffxcr\") pod \"auto-csr-approver-29483700-rhmqg\" (UID: \"9f6c6d06-df0e-475a-8008-8338129bc609\") " pod="openshift-infra/auto-csr-approver-29483700-rhmqg" Jan 21 19:00:00 crc kubenswrapper[5099]: I0121 19:00:00.486515 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483700-rhmqg" Jan 21 19:00:00 crc kubenswrapper[5099]: I0121 19:00:00.499551 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483700-vxfjk" Jan 21 19:00:00 crc kubenswrapper[5099]: I0121 19:00:00.791754 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483700-rhmqg"] Jan 21 19:00:00 crc kubenswrapper[5099]: I0121 19:00:00.798411 5099 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 19:00:00 crc kubenswrapper[5099]: I0121 19:00:00.967201 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483700-vxfjk"] Jan 21 19:00:00 crc kubenswrapper[5099]: W0121 19:00:00.976960 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod918ee396_65c2_4421_959d_f245f87b6269.slice/crio-8423c4a1842f0e71e0cd598c9cda9c03d255bae498a45378247f0e45b284d06e WatchSource:0}: Error finding container 8423c4a1842f0e71e0cd598c9cda9c03d255bae498a45378247f0e45b284d06e: Status 404 returned error can't find the container with id 8423c4a1842f0e71e0cd598c9cda9c03d255bae498a45378247f0e45b284d06e Jan 21 19:00:01 crc kubenswrapper[5099]: I0121 19:00:01.295024 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483700-vxfjk" event={"ID":"918ee396-65c2-4421-959d-f245f87b6269","Type":"ContainerStarted","Data":"dbcdaae3a155d08d9973fa41567880005648879a40f5ff0ab7892c324051af41"} Jan 21 19:00:01 crc kubenswrapper[5099]: I0121 19:00:01.295346 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483700-vxfjk" event={"ID":"918ee396-65c2-4421-959d-f245f87b6269","Type":"ContainerStarted","Data":"8423c4a1842f0e71e0cd598c9cda9c03d255bae498a45378247f0e45b284d06e"} Jan 21 19:00:01 crc kubenswrapper[5099]: I0121 19:00:01.296464 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483700-rhmqg" event={"ID":"9f6c6d06-df0e-475a-8008-8338129bc609","Type":"ContainerStarted","Data":"9cd793a9da2f191331aba50310d99093f53aba34e64eae8a818bd3c0a4e0972b"} Jan 21 19:00:01 crc kubenswrapper[5099]: I0121 19:00:01.316132 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29483700-vxfjk" podStartSLOduration=1.31610654 podStartE2EDuration="1.31610654s" podCreationTimestamp="2026-01-21 19:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 19:00:01.312059481 +0000 UTC m=+2758.726021942" watchObservedRunningTime="2026-01-21 19:00:01.31610654 +0000 UTC m=+2758.730069001" Jan 21 19:00:02 crc kubenswrapper[5099]: I0121 19:00:02.308098 5099 generic.go:358] "Generic (PLEG): container finished" podID="918ee396-65c2-4421-959d-f245f87b6269" containerID="dbcdaae3a155d08d9973fa41567880005648879a40f5ff0ab7892c324051af41" exitCode=0 Jan 21 19:00:02 crc kubenswrapper[5099]: I0121 19:00:02.308173 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483700-vxfjk" 
event={"ID":"918ee396-65c2-4421-959d-f245f87b6269","Type":"ContainerDied","Data":"dbcdaae3a155d08d9973fa41567880005648879a40f5ff0ab7892c324051af41"} Jan 21 19:00:03 crc kubenswrapper[5099]: I0121 19:00:03.585367 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483700-vxfjk" Jan 21 19:00:03 crc kubenswrapper[5099]: I0121 19:00:03.746805 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/918ee396-65c2-4421-959d-f245f87b6269-config-volume\") pod \"918ee396-65c2-4421-959d-f245f87b6269\" (UID: \"918ee396-65c2-4421-959d-f245f87b6269\") " Jan 21 19:00:03 crc kubenswrapper[5099]: I0121 19:00:03.747143 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/918ee396-65c2-4421-959d-f245f87b6269-secret-volume\") pod \"918ee396-65c2-4421-959d-f245f87b6269\" (UID: \"918ee396-65c2-4421-959d-f245f87b6269\") " Jan 21 19:00:03 crc kubenswrapper[5099]: I0121 19:00:03.747204 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9msm7\" (UniqueName: \"kubernetes.io/projected/918ee396-65c2-4421-959d-f245f87b6269-kube-api-access-9msm7\") pod \"918ee396-65c2-4421-959d-f245f87b6269\" (UID: \"918ee396-65c2-4421-959d-f245f87b6269\") " Jan 21 19:00:03 crc kubenswrapper[5099]: I0121 19:00:03.748461 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/918ee396-65c2-4421-959d-f245f87b6269-config-volume" (OuterVolumeSpecName: "config-volume") pod "918ee396-65c2-4421-959d-f245f87b6269" (UID: "918ee396-65c2-4421-959d-f245f87b6269"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 19:00:03 crc kubenswrapper[5099]: I0121 19:00:03.752942 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/918ee396-65c2-4421-959d-f245f87b6269-kube-api-access-9msm7" (OuterVolumeSpecName: "kube-api-access-9msm7") pod "918ee396-65c2-4421-959d-f245f87b6269" (UID: "918ee396-65c2-4421-959d-f245f87b6269"). InnerVolumeSpecName "kube-api-access-9msm7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 19:00:03 crc kubenswrapper[5099]: I0121 19:00:03.753330 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/918ee396-65c2-4421-959d-f245f87b6269-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "918ee396-65c2-4421-959d-f245f87b6269" (UID: "918ee396-65c2-4421-959d-f245f87b6269"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 19:00:03 crc kubenswrapper[5099]: I0121 19:00:03.849865 5099 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/918ee396-65c2-4421-959d-f245f87b6269-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 19:00:03 crc kubenswrapper[5099]: I0121 19:00:03.850047 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9msm7\" (UniqueName: \"kubernetes.io/projected/918ee396-65c2-4421-959d-f245f87b6269-kube-api-access-9msm7\") on node \"crc\" DevicePath \"\"" Jan 21 19:00:03 crc kubenswrapper[5099]: I0121 19:00:03.850066 5099 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/918ee396-65c2-4421-959d-f245f87b6269-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 19:00:04 crc kubenswrapper[5099]: I0121 19:00:04.329287 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483700-vxfjk" event={"ID":"918ee396-65c2-4421-959d-f245f87b6269","Type":"ContainerDied","Data":"8423c4a1842f0e71e0cd598c9cda9c03d255bae498a45378247f0e45b284d06e"} Jan 21 19:00:04 crc kubenswrapper[5099]: I0121 19:00:04.329359 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8423c4a1842f0e71e0cd598c9cda9c03d255bae498a45378247f0e45b284d06e" Jan 21 19:00:04 crc kubenswrapper[5099]: I0121 19:00:04.329527 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483700-vxfjk" Jan 21 19:00:04 crc kubenswrapper[5099]: I0121 19:00:04.387684 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483655-48djt"] Jan 21 19:00:04 crc kubenswrapper[5099]: I0121 19:00:04.397105 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483655-48djt"] Jan 21 19:00:05 crc kubenswrapper[5099]: I0121 19:00:05.922772 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05e481c5-0ad1-4c76-bf43-a32b82b763c7" path="/var/lib/kubelet/pods/05e481c5-0ad1-4c76-bf43-a32b82b763c7/volumes" Jan 21 19:00:14 crc kubenswrapper[5099]: I0121 19:00:14.420206 5099 generic.go:358] "Generic (PLEG): container finished" podID="9f6c6d06-df0e-475a-8008-8338129bc609" containerID="8097979396b4c5be40c7bc738dc73ec47ce4542381d833c20a8b770a7ea91d7e" exitCode=0 Jan 21 19:00:14 crc kubenswrapper[5099]: I0121 19:00:14.420356 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483700-rhmqg" event={"ID":"9f6c6d06-df0e-475a-8008-8338129bc609","Type":"ContainerDied","Data":"8097979396b4c5be40c7bc738dc73ec47ce4542381d833c20a8b770a7ea91d7e"} Jan 21 19:00:15 crc kubenswrapper[5099]: I0121 19:00:15.735412 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483700-rhmqg" Jan 21 19:00:15 crc kubenswrapper[5099]: I0121 19:00:15.858722 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffxcr\" (UniqueName: \"kubernetes.io/projected/9f6c6d06-df0e-475a-8008-8338129bc609-kube-api-access-ffxcr\") pod \"9f6c6d06-df0e-475a-8008-8338129bc609\" (UID: \"9f6c6d06-df0e-475a-8008-8338129bc609\") " Jan 21 19:00:15 crc kubenswrapper[5099]: I0121 19:00:15.873022 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f6c6d06-df0e-475a-8008-8338129bc609-kube-api-access-ffxcr" (OuterVolumeSpecName: "kube-api-access-ffxcr") pod "9f6c6d06-df0e-475a-8008-8338129bc609" (UID: "9f6c6d06-df0e-475a-8008-8338129bc609"). InnerVolumeSpecName "kube-api-access-ffxcr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 19:00:15 crc kubenswrapper[5099]: I0121 19:00:15.961133 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ffxcr\" (UniqueName: \"kubernetes.io/projected/9f6c6d06-df0e-475a-8008-8338129bc609-kube-api-access-ffxcr\") on node \"crc\" DevicePath \"\"" Jan 21 19:00:16 crc kubenswrapper[5099]: I0121 19:00:16.448068 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483700-rhmqg" Jan 21 19:00:16 crc kubenswrapper[5099]: I0121 19:00:16.448103 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483700-rhmqg" event={"ID":"9f6c6d06-df0e-475a-8008-8338129bc609","Type":"ContainerDied","Data":"9cd793a9da2f191331aba50310d99093f53aba34e64eae8a818bd3c0a4e0972b"} Jan 21 19:00:16 crc kubenswrapper[5099]: I0121 19:00:16.449385 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cd793a9da2f191331aba50310d99093f53aba34e64eae8a818bd3c0a4e0972b" Jan 21 19:00:16 crc kubenswrapper[5099]: I0121 19:00:16.811247 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483694-5bjtk"] Jan 21 19:00:16 crc kubenswrapper[5099]: I0121 19:00:16.819514 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483694-5bjtk"] Jan 21 19:00:17 crc kubenswrapper[5099]: I0121 19:00:17.925820 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="142fcda0-4913-4ba4-8205-ed544501bdc5" path="/var/lib/kubelet/pods/142fcda0-4913-4ba4-8205-ed544501bdc5/volumes" Jan 21 19:00:19 crc kubenswrapper[5099]: I0121 19:00:19.606987 5099 scope.go:117] "RemoveContainer" containerID="edbdca7359202df10dbfb1a7035d4ee71d12483c1f95545f023e290fbc5d866a" Jan 21 19:00:19 crc kubenswrapper[5099]: I0121 19:00:19.723181 5099 scope.go:117] "RemoveContainer" containerID="462e144af1c83a94a48d6dab7d1525a6ce5af7900773a8993d3a5ed757c3fc9e" Jan 21 19:00:22 crc kubenswrapper[5099]: I0121 19:00:22.064901 5099 patch_prober.go:28] interesting pod/machine-config-daemon-hsl47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 19:00:22 crc kubenswrapper[5099]: I0121 19:00:22.065482 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 19:00:52 crc kubenswrapper[5099]: I0121 19:00:52.064298 5099 patch_prober.go:28] interesting pod/machine-config-daemon-hsl47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 19:00:52 crc kubenswrapper[5099]: I0121 19:00:52.066653 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 19:01:22 crc kubenswrapper[5099]: I0121 19:01:22.065274 5099 patch_prober.go:28] interesting pod/machine-config-daemon-hsl47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 19:01:22 crc kubenswrapper[5099]: I0121 19:01:22.066043 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 19:01:22 crc kubenswrapper[5099]: I0121 19:01:22.066116 5099 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" Jan 21 19:01:22 crc kubenswrapper[5099]: I0121 19:01:22.067029 5099 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"19f284b3397f38ecead8f041287c5ab09dae33e60d991a25139de1f67cebf1aa"} pod="openshift-machine-config-operator/machine-config-daemon-hsl47" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 19:01:22 crc kubenswrapper[5099]: I0121 19:01:22.067111 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" containerID="cri-o://19f284b3397f38ecead8f041287c5ab09dae33e60d991a25139de1f67cebf1aa" gracePeriod=600 Jan 21 19:01:23 crc kubenswrapper[5099]: I0121 19:01:23.064534 5099 generic.go:358] "Generic (PLEG): container finished" podID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerID="19f284b3397f38ecead8f041287c5ab09dae33e60d991a25139de1f67cebf1aa" exitCode=0 Jan 21 19:01:23 crc kubenswrapper[5099]: I0121 19:01:23.064669 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" event={"ID":"b19b831f-eaf0-4c77-859b-84eb9a5f233c","Type":"ContainerDied","Data":"19f284b3397f38ecead8f041287c5ab09dae33e60d991a25139de1f67cebf1aa"} Jan 21 19:01:23 crc kubenswrapper[5099]: I0121 19:01:23.065261 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" event={"ID":"b19b831f-eaf0-4c77-859b-84eb9a5f233c","Type":"ContainerStarted","Data":"ed8f65e4e865d742da54ffb07adeb8093cf746fe68795dee140a4dd38321c6df"} Jan 21 
19:01:23 crc kubenswrapper[5099]: I0121 19:01:23.065313 5099 scope.go:117] "RemoveContainer" containerID="e5ecb8c8febc910222a2599d2113d1f6c3c8b860a16ca2711496a4459929451f" Jan 21 19:02:00 crc kubenswrapper[5099]: I0121 19:02:00.147055 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483702-k45db"] Jan 21 19:02:00 crc kubenswrapper[5099]: I0121 19:02:00.150949 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9f6c6d06-df0e-475a-8008-8338129bc609" containerName="oc" Jan 21 19:02:00 crc kubenswrapper[5099]: I0121 19:02:00.151174 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f6c6d06-df0e-475a-8008-8338129bc609" containerName="oc" Jan 21 19:02:00 crc kubenswrapper[5099]: I0121 19:02:00.151532 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="918ee396-65c2-4421-959d-f245f87b6269" containerName="collect-profiles" Jan 21 19:02:00 crc kubenswrapper[5099]: I0121 19:02:00.152274 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="918ee396-65c2-4421-959d-f245f87b6269" containerName="collect-profiles" Jan 21 19:02:00 crc kubenswrapper[5099]: I0121 19:02:00.152978 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="9f6c6d06-df0e-475a-8008-8338129bc609" containerName="oc" Jan 21 19:02:00 crc kubenswrapper[5099]: I0121 19:02:00.153326 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="918ee396-65c2-4421-959d-f245f87b6269" containerName="collect-profiles" Jan 21 19:02:00 crc kubenswrapper[5099]: I0121 19:02:00.161080 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483702-k45db" Jan 21 19:02:00 crc kubenswrapper[5099]: I0121 19:02:00.164750 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 19:02:00 crc kubenswrapper[5099]: I0121 19:02:00.165148 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 19:02:00 crc kubenswrapper[5099]: I0121 19:02:00.165220 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483702-k45db"] Jan 21 19:02:00 crc kubenswrapper[5099]: I0121 19:02:00.165453 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-z79sf\"" Jan 21 19:02:00 crc kubenswrapper[5099]: I0121 19:02:00.320857 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ft74\" (UniqueName: \"kubernetes.io/projected/3449fc40-0754-49dc-9823-78990024e365-kube-api-access-4ft74\") pod \"auto-csr-approver-29483702-k45db\" (UID: \"3449fc40-0754-49dc-9823-78990024e365\") " pod="openshift-infra/auto-csr-approver-29483702-k45db" Jan 21 19:02:00 crc kubenswrapper[5099]: I0121 19:02:00.422441 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4ft74\" (UniqueName: \"kubernetes.io/projected/3449fc40-0754-49dc-9823-78990024e365-kube-api-access-4ft74\") pod \"auto-csr-approver-29483702-k45db\" (UID: \"3449fc40-0754-49dc-9823-78990024e365\") " pod="openshift-infra/auto-csr-approver-29483702-k45db" Jan 21 19:02:00 crc kubenswrapper[5099]: I0121 19:02:00.446912 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ft74\" (UniqueName: 
\"kubernetes.io/projected/3449fc40-0754-49dc-9823-78990024e365-kube-api-access-4ft74\") pod \"auto-csr-approver-29483702-k45db\" (UID: \"3449fc40-0754-49dc-9823-78990024e365\") " pod="openshift-infra/auto-csr-approver-29483702-k45db" Jan 21 19:02:00 crc kubenswrapper[5099]: I0121 19:02:00.496972 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483702-k45db" Jan 21 19:02:00 crc kubenswrapper[5099]: I0121 19:02:00.740459 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483702-k45db"] Jan 21 19:02:00 crc kubenswrapper[5099]: W0121 19:02:00.748933 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3449fc40_0754_49dc_9823_78990024e365.slice/crio-d266ee880af6bf4cdb3fc5a320d6b510290e85cdfd3981fe8af7b7be5607bb11 WatchSource:0}: Error finding container d266ee880af6bf4cdb3fc5a320d6b510290e85cdfd3981fe8af7b7be5607bb11: Status 404 returned error can't find the container with id d266ee880af6bf4cdb3fc5a320d6b510290e85cdfd3981fe8af7b7be5607bb11 Jan 21 19:02:01 crc kubenswrapper[5099]: I0121 19:02:01.480273 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483702-k45db" event={"ID":"3449fc40-0754-49dc-9823-78990024e365","Type":"ContainerStarted","Data":"d266ee880af6bf4cdb3fc5a320d6b510290e85cdfd3981fe8af7b7be5607bb11"} Jan 21 19:02:02 crc kubenswrapper[5099]: I0121 19:02:02.488298 5099 generic.go:358] "Generic (PLEG): container finished" podID="3449fc40-0754-49dc-9823-78990024e365" containerID="d302acce9f1d1e3815e34a8e84a347f91a0ac018ab8da3cb42bb75fb508c7ff5" exitCode=0 Jan 21 19:02:02 crc kubenswrapper[5099]: I0121 19:02:02.488879 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483702-k45db" event={"ID":"3449fc40-0754-49dc-9823-78990024e365","Type":"ContainerDied","Data":"d302acce9f1d1e3815e34a8e84a347f91a0ac018ab8da3cb42bb75fb508c7ff5"} Jan 21 19:02:03 crc kubenswrapper[5099]: I0121 19:02:03.795960 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483702-k45db" Jan 21 19:02:03 crc kubenswrapper[5099]: I0121 19:02:03.878799 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ft74\" (UniqueName: \"kubernetes.io/projected/3449fc40-0754-49dc-9823-78990024e365-kube-api-access-4ft74\") pod \"3449fc40-0754-49dc-9823-78990024e365\" (UID: \"3449fc40-0754-49dc-9823-78990024e365\") " Jan 21 19:02:03 crc kubenswrapper[5099]: I0121 19:02:03.891335 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3449fc40-0754-49dc-9823-78990024e365-kube-api-access-4ft74" (OuterVolumeSpecName: "kube-api-access-4ft74") pod "3449fc40-0754-49dc-9823-78990024e365" (UID: "3449fc40-0754-49dc-9823-78990024e365"). InnerVolumeSpecName "kube-api-access-4ft74". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 19:02:03 crc kubenswrapper[5099]: I0121 19:02:03.984260 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4ft74\" (UniqueName: \"kubernetes.io/projected/3449fc40-0754-49dc-9823-78990024e365-kube-api-access-4ft74\") on node \"crc\" DevicePath \"\"" Jan 21 19:02:04 crc kubenswrapper[5099]: I0121 19:02:04.511141 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483702-k45db" event={"ID":"3449fc40-0754-49dc-9823-78990024e365","Type":"ContainerDied","Data":"d266ee880af6bf4cdb3fc5a320d6b510290e85cdfd3981fe8af7b7be5607bb11"} Jan 21 19:02:04 crc kubenswrapper[5099]: I0121 19:02:04.511697 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d266ee880af6bf4cdb3fc5a320d6b510290e85cdfd3981fe8af7b7be5607bb11" Jan 21 19:02:04 crc kubenswrapper[5099]: I0121 19:02:04.511280 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483702-k45db" Jan 21 19:02:04 crc kubenswrapper[5099]: I0121 19:02:04.892623 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483696-4m4xz"] Jan 21 19:02:04 crc kubenswrapper[5099]: I0121 19:02:04.898701 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483696-4m4xz"] Jan 21 19:02:05 crc kubenswrapper[5099]: I0121 19:02:05.926875 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58bf5cc4-83af-4c17-a2fa-b8de56012d23" path="/var/lib/kubelet/pods/58bf5cc4-83af-4c17-a2fa-b8de56012d23/volumes" Jan 21 19:02:19 crc kubenswrapper[5099]: I0121 19:02:19.804578 5099 scope.go:117] "RemoveContainer" containerID="a2ba036834e456bae512f9ae6a5afda9f27f0ad6ac1916616dc08dbe3eb944fc" Jan 21 19:03:22 crc kubenswrapper[5099]: I0121 19:03:22.065094 5099 patch_prober.go:28] interesting pod/machine-config-daemon-hsl47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 19:03:22 crc kubenswrapper[5099]: I0121 19:03:22.066703 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 19:03:52 crc kubenswrapper[5099]: I0121 19:03:52.065478 5099 patch_prober.go:28] interesting pod/machine-config-daemon-hsl47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 19:03:52 crc kubenswrapper[5099]: I0121 19:03:52.066400 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 19:04:00 crc kubenswrapper[5099]: I0121 19:04:00.149748 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483704-lskp6"] Jan 21 19:04:00 crc 
kubenswrapper[5099]: I0121 19:04:00.151437 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3449fc40-0754-49dc-9823-78990024e365" containerName="oc" Jan 21 19:04:00 crc kubenswrapper[5099]: I0121 19:04:00.151458 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="3449fc40-0754-49dc-9823-78990024e365" containerName="oc" Jan 21 19:04:00 crc kubenswrapper[5099]: I0121 19:04:00.151599 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="3449fc40-0754-49dc-9823-78990024e365" containerName="oc" Jan 21 19:04:00 crc kubenswrapper[5099]: I0121 19:04:00.159097 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483704-lskp6"] Jan 21 19:04:00 crc kubenswrapper[5099]: I0121 19:04:00.159371 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483704-lskp6" Jan 21 19:04:00 crc kubenswrapper[5099]: I0121 19:04:00.164808 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 19:04:00 crc kubenswrapper[5099]: I0121 19:04:00.165068 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-z79sf\"" Jan 21 19:04:00 crc kubenswrapper[5099]: I0121 19:04:00.165273 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 19:04:00 crc kubenswrapper[5099]: I0121 19:04:00.211484 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwgh4\" (UniqueName: \"kubernetes.io/projected/15f8150b-1214-4fe4-861a-3b4a9b9bd987-kube-api-access-xwgh4\") pod \"auto-csr-approver-29483704-lskp6\" (UID: \"15f8150b-1214-4fe4-861a-3b4a9b9bd987\") " pod="openshift-infra/auto-csr-approver-29483704-lskp6" Jan 21 19:04:00 crc kubenswrapper[5099]: I0121 19:04:00.314017 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xwgh4\" (UniqueName: \"kubernetes.io/projected/15f8150b-1214-4fe4-861a-3b4a9b9bd987-kube-api-access-xwgh4\") pod \"auto-csr-approver-29483704-lskp6\" (UID: \"15f8150b-1214-4fe4-861a-3b4a9b9bd987\") " pod="openshift-infra/auto-csr-approver-29483704-lskp6" Jan 21 19:04:00 crc kubenswrapper[5099]: I0121 19:04:00.337796 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwgh4\" (UniqueName: \"kubernetes.io/projected/15f8150b-1214-4fe4-861a-3b4a9b9bd987-kube-api-access-xwgh4\") pod \"auto-csr-approver-29483704-lskp6\" (UID: \"15f8150b-1214-4fe4-861a-3b4a9b9bd987\") " pod="openshift-infra/auto-csr-approver-29483704-lskp6" Jan 21 19:04:00 crc kubenswrapper[5099]: I0121 19:04:00.487828 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483704-lskp6" Jan 21 19:04:00 crc kubenswrapper[5099]: I0121 19:04:00.745571 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483704-lskp6"] Jan 21 19:04:00 crc kubenswrapper[5099]: I0121 19:04:00.878265 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483704-lskp6" event={"ID":"15f8150b-1214-4fe4-861a-3b4a9b9bd987","Type":"ContainerStarted","Data":"c11d3bbca95ef5b130c4670097a59f3c6a01bfe10403573a2b5ac1c39f8b7b6f"} Jan 21 19:04:02 crc kubenswrapper[5099]: I0121 19:04:02.896150 5099 generic.go:358] "Generic (PLEG): container finished" podID="15f8150b-1214-4fe4-861a-3b4a9b9bd987" containerID="cd1b01b1608367a6c2c2c6d2bcc34646c61a1712317893cfc05dbedd13e393a4" exitCode=0 Jan 21 19:04:02 crc kubenswrapper[5099]: I0121 19:04:02.896321 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483704-lskp6" event={"ID":"15f8150b-1214-4fe4-861a-3b4a9b9bd987","Type":"ContainerDied","Data":"cd1b01b1608367a6c2c2c6d2bcc34646c61a1712317893cfc05dbedd13e393a4"} Jan 21 19:04:04 crc kubenswrapper[5099]: I0121 19:04:04.198638 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483704-lskp6" Jan 21 19:04:04 crc kubenswrapper[5099]: I0121 19:04:04.294044 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwgh4\" (UniqueName: \"kubernetes.io/projected/15f8150b-1214-4fe4-861a-3b4a9b9bd987-kube-api-access-xwgh4\") pod \"15f8150b-1214-4fe4-861a-3b4a9b9bd987\" (UID: \"15f8150b-1214-4fe4-861a-3b4a9b9bd987\") " Jan 21 19:04:04 crc kubenswrapper[5099]: I0121 19:04:04.301617 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15f8150b-1214-4fe4-861a-3b4a9b9bd987-kube-api-access-xwgh4" (OuterVolumeSpecName: "kube-api-access-xwgh4") pod "15f8150b-1214-4fe4-861a-3b4a9b9bd987" (UID: "15f8150b-1214-4fe4-861a-3b4a9b9bd987"). InnerVolumeSpecName "kube-api-access-xwgh4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 19:04:04 crc kubenswrapper[5099]: I0121 19:04:04.395843 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xwgh4\" (UniqueName: \"kubernetes.io/projected/15f8150b-1214-4fe4-861a-3b4a9b9bd987-kube-api-access-xwgh4\") on node \"crc\" DevicePath \"\"" Jan 21 19:04:04 crc kubenswrapper[5099]: I0121 19:04:04.918856 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483704-lskp6" Jan 21 19:04:04 crc kubenswrapper[5099]: I0121 19:04:04.918853 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483704-lskp6" event={"ID":"15f8150b-1214-4fe4-861a-3b4a9b9bd987","Type":"ContainerDied","Data":"c11d3bbca95ef5b130c4670097a59f3c6a01bfe10403573a2b5ac1c39f8b7b6f"} Jan 21 19:04:04 crc kubenswrapper[5099]: I0121 19:04:04.919023 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c11d3bbca95ef5b130c4670097a59f3c6a01bfe10403573a2b5ac1c39f8b7b6f" Jan 21 19:04:05 crc kubenswrapper[5099]: I0121 19:04:05.287656 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483698-r9sgw"] Jan 21 19:04:05 crc kubenswrapper[5099]: I0121 19:04:05.295353 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483698-r9sgw"] Jan 21 19:04:05 crc kubenswrapper[5099]: I0121 19:04:05.301916 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6pvpm_d9b34413-4767-4d59-b13b-8f882453977a/kube-multus/0.log" Jan 21 19:04:05 crc kubenswrapper[5099]: I0121 19:04:05.304041 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6pvpm_d9b34413-4767-4d59-b13b-8f882453977a/kube-multus/0.log" Jan 21 19:04:05 crc kubenswrapper[5099]: I0121 19:04:05.309892 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 19:04:05 crc kubenswrapper[5099]: I0121 19:04:05.312709 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 19:04:05 crc kubenswrapper[5099]: I0121 19:04:05.926263 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbd324cb-b624-4bd5-bcdc-335e5caeabe6" path="/var/lib/kubelet/pods/dbd324cb-b624-4bd5-bcdc-335e5caeabe6/volumes" Jan 21 19:04:14 crc kubenswrapper[5099]: I0121 19:04:14.082119 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bn467"] Jan 21 19:04:14 crc kubenswrapper[5099]: I0121 19:04:14.084789 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="15f8150b-1214-4fe4-861a-3b4a9b9bd987" containerName="oc" Jan 21 19:04:14 crc kubenswrapper[5099]: I0121 19:04:14.084812 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="15f8150b-1214-4fe4-861a-3b4a9b9bd987" containerName="oc" Jan 21 19:04:14 crc kubenswrapper[5099]: I0121 19:04:14.084977 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="15f8150b-1214-4fe4-861a-3b4a9b9bd987" containerName="oc" Jan 21 19:04:14 crc kubenswrapper[5099]: I0121 19:04:14.089788 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bn467" Jan 21 19:04:14 crc kubenswrapper[5099]: I0121 19:04:14.105085 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bn467"] Jan 21 19:04:14 crc kubenswrapper[5099]: I0121 19:04:14.202097 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af27e640-6535-4b7f-a1c7-a7315332d7de-utilities\") pod \"community-operators-bn467\" (UID: \"af27e640-6535-4b7f-a1c7-a7315332d7de\") " pod="openshift-marketplace/community-operators-bn467" Jan 21 19:04:14 crc kubenswrapper[5099]: I0121 19:04:14.202175 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vb5p\" (UniqueName: \"kubernetes.io/projected/af27e640-6535-4b7f-a1c7-a7315332d7de-kube-api-access-9vb5p\") pod \"community-operators-bn467\" (UID: \"af27e640-6535-4b7f-a1c7-a7315332d7de\") " pod="openshift-marketplace/community-operators-bn467" Jan 21 19:04:14 crc kubenswrapper[5099]: I0121 19:04:14.202227 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af27e640-6535-4b7f-a1c7-a7315332d7de-catalog-content\") pod \"community-operators-bn467\" (UID: \"af27e640-6535-4b7f-a1c7-a7315332d7de\") " pod="openshift-marketplace/community-operators-bn467" Jan 21 19:04:14 crc kubenswrapper[5099]: I0121 19:04:14.304223 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af27e640-6535-4b7f-a1c7-a7315332d7de-utilities\") pod \"community-operators-bn467\" (UID: \"af27e640-6535-4b7f-a1c7-a7315332d7de\") " pod="openshift-marketplace/community-operators-bn467" Jan 21 19:04:14 crc kubenswrapper[5099]: I0121 19:04:14.304296 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9vb5p\" (UniqueName: \"kubernetes.io/projected/af27e640-6535-4b7f-a1c7-a7315332d7de-kube-api-access-9vb5p\") pod \"community-operators-bn467\" (UID: \"af27e640-6535-4b7f-a1c7-a7315332d7de\") " pod="openshift-marketplace/community-operators-bn467" Jan 21 19:04:14 crc kubenswrapper[5099]: I0121 19:04:14.304439 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af27e640-6535-4b7f-a1c7-a7315332d7de-catalog-content\") pod \"community-operators-bn467\" (UID: \"af27e640-6535-4b7f-a1c7-a7315332d7de\") " pod="openshift-marketplace/community-operators-bn467" Jan 21 19:04:14 crc kubenswrapper[5099]: I0121 19:04:14.305085 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af27e640-6535-4b7f-a1c7-a7315332d7de-catalog-content\") pod \"community-operators-bn467\" (UID: \"af27e640-6535-4b7f-a1c7-a7315332d7de\") " pod="openshift-marketplace/community-operators-bn467" Jan 21 19:04:14 crc kubenswrapper[5099]: I0121 19:04:14.305080 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af27e640-6535-4b7f-a1c7-a7315332d7de-utilities\") pod \"community-operators-bn467\" (UID: \"af27e640-6535-4b7f-a1c7-a7315332d7de\") " pod="openshift-marketplace/community-operators-bn467" Jan 21 19:04:14 crc kubenswrapper[5099]: I0121 19:04:14.330634 5099 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9vb5p\" (UniqueName: \"kubernetes.io/projected/af27e640-6535-4b7f-a1c7-a7315332d7de-kube-api-access-9vb5p\") pod \"community-operators-bn467\" (UID: \"af27e640-6535-4b7f-a1c7-a7315332d7de\") " pod="openshift-marketplace/community-operators-bn467" Jan 21 19:04:14 crc kubenswrapper[5099]: I0121 19:04:14.408758 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bn467" Jan 21 19:04:14 crc kubenswrapper[5099]: I0121 19:04:14.709639 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bn467"] Jan 21 19:04:15 crc kubenswrapper[5099]: I0121 19:04:15.015821 5099 generic.go:358] "Generic (PLEG): container finished" podID="af27e640-6535-4b7f-a1c7-a7315332d7de" containerID="8fb05498425b67f7971c29bc04ed755fa045ebc6ec05edfe05bd11895a6499a7" exitCode=0 Jan 21 19:04:15 crc kubenswrapper[5099]: I0121 19:04:15.016598 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bn467" event={"ID":"af27e640-6535-4b7f-a1c7-a7315332d7de","Type":"ContainerDied","Data":"8fb05498425b67f7971c29bc04ed755fa045ebc6ec05edfe05bd11895a6499a7"} Jan 21 19:04:15 crc kubenswrapper[5099]: I0121 19:04:15.016655 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bn467" event={"ID":"af27e640-6535-4b7f-a1c7-a7315332d7de","Type":"ContainerStarted","Data":"4a28b6d303d9b5c6a6aca16f27e64b0f23a6bf61d6190f533ef8f7ea1713807e"} Jan 21 19:04:16 crc kubenswrapper[5099]: I0121 19:04:16.026989 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bn467" event={"ID":"af27e640-6535-4b7f-a1c7-a7315332d7de","Type":"ContainerStarted","Data":"ed12cc7f78886e5f0ed3e4093be4560685a52def346afb5b681fe28ce8c4d849"} Jan 21 19:04:16 crc kubenswrapper[5099]: I0121 19:04:16.284466 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mhw7m"] Jan 21 19:04:16 crc kubenswrapper[5099]: I0121 19:04:16.328085 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mhw7m" Jan 21 19:04:16 crc kubenswrapper[5099]: I0121 19:04:16.343995 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mhw7m"] Jan 21 19:04:16 crc kubenswrapper[5099]: I0121 19:04:16.437530 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49fa2683-7f9f-4152-8cc3-238620fa6630-utilities\") pod \"certified-operators-mhw7m\" (UID: \"49fa2683-7f9f-4152-8cc3-238620fa6630\") " pod="openshift-marketplace/certified-operators-mhw7m" Jan 21 19:04:16 crc kubenswrapper[5099]: I0121 19:04:16.437593 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x7pb\" (UniqueName: \"kubernetes.io/projected/49fa2683-7f9f-4152-8cc3-238620fa6630-kube-api-access-6x7pb\") pod \"certified-operators-mhw7m\" (UID: \"49fa2683-7f9f-4152-8cc3-238620fa6630\") " pod="openshift-marketplace/certified-operators-mhw7m" Jan 21 19:04:16 crc kubenswrapper[5099]: I0121 19:04:16.437671 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49fa2683-7f9f-4152-8cc3-238620fa6630-catalog-content\") pod \"certified-operators-mhw7m\" (UID: \"49fa2683-7f9f-4152-8cc3-238620fa6630\") " pod="openshift-marketplace/certified-operators-mhw7m" Jan 21 19:04:16 crc kubenswrapper[5099]: I0121 19:04:16.539387 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49fa2683-7f9f-4152-8cc3-238620fa6630-catalog-content\") pod \"certified-operators-mhw7m\" (UID: \"49fa2683-7f9f-4152-8cc3-238620fa6630\") " pod="openshift-marketplace/certified-operators-mhw7m" Jan 21 19:04:16 crc kubenswrapper[5099]: I0121 19:04:16.539496 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49fa2683-7f9f-4152-8cc3-238620fa6630-utilities\") pod \"certified-operators-mhw7m\" (UID: \"49fa2683-7f9f-4152-8cc3-238620fa6630\") " pod="openshift-marketplace/certified-operators-mhw7m" Jan 21 19:04:16 crc kubenswrapper[5099]: I0121 19:04:16.539543 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6x7pb\" (UniqueName: \"kubernetes.io/projected/49fa2683-7f9f-4152-8cc3-238620fa6630-kube-api-access-6x7pb\") pod \"certified-operators-mhw7m\" (UID: \"49fa2683-7f9f-4152-8cc3-238620fa6630\") " pod="openshift-marketplace/certified-operators-mhw7m" Jan 21 19:04:16 crc kubenswrapper[5099]: I0121 19:04:16.540015 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49fa2683-7f9f-4152-8cc3-238620fa6630-catalog-content\") pod \"certified-operators-mhw7m\" (UID: \"49fa2683-7f9f-4152-8cc3-238620fa6630\") " pod="openshift-marketplace/certified-operators-mhw7m" Jan 21 19:04:16 crc kubenswrapper[5099]: I0121 19:04:16.540093 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49fa2683-7f9f-4152-8cc3-238620fa6630-utilities\") pod \"certified-operators-mhw7m\" (UID: \"49fa2683-7f9f-4152-8cc3-238620fa6630\") " pod="openshift-marketplace/certified-operators-mhw7m" Jan 21 19:04:16 crc kubenswrapper[5099]: I0121 19:04:16.572909 5099 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6x7pb\" (UniqueName: \"kubernetes.io/projected/49fa2683-7f9f-4152-8cc3-238620fa6630-kube-api-access-6x7pb\") pod \"certified-operators-mhw7m\" (UID: \"49fa2683-7f9f-4152-8cc3-238620fa6630\") " pod="openshift-marketplace/certified-operators-mhw7m" Jan 21 19:04:16 crc kubenswrapper[5099]: I0121 19:04:16.791961 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mhw7m" Jan 21 19:04:17 crc kubenswrapper[5099]: I0121 19:04:17.039326 5099 generic.go:358] "Generic (PLEG): container finished" podID="af27e640-6535-4b7f-a1c7-a7315332d7de" containerID="ed12cc7f78886e5f0ed3e4093be4560685a52def346afb5b681fe28ce8c4d849" exitCode=0 Jan 21 19:04:17 crc kubenswrapper[5099]: I0121 19:04:17.039500 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bn467" event={"ID":"af27e640-6535-4b7f-a1c7-a7315332d7de","Type":"ContainerDied","Data":"ed12cc7f78886e5f0ed3e4093be4560685a52def346afb5b681fe28ce8c4d849"} Jan 21 19:04:17 crc kubenswrapper[5099]: I0121 19:04:17.322226 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mhw7m"] Jan 21 19:04:17 crc kubenswrapper[5099]: W0121 19:04:17.330814 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49fa2683_7f9f_4152_8cc3_238620fa6630.slice/crio-871c527d349ddf1a8c0739edeab3f685182dc9d19d54c20adec325a755e9b366 WatchSource:0}: Error finding container 871c527d349ddf1a8c0739edeab3f685182dc9d19d54c20adec325a755e9b366: Status 404 returned error can't find the container with id 871c527d349ddf1a8c0739edeab3f685182dc9d19d54c20adec325a755e9b366 Jan 21 19:04:18 crc kubenswrapper[5099]: I0121 19:04:18.065347 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bn467" event={"ID":"af27e640-6535-4b7f-a1c7-a7315332d7de","Type":"ContainerStarted","Data":"b55a37c50942c735154ba0d383d3c5843a0691aab1485a8247b9f694113a28fb"} Jan 21 19:04:18 crc kubenswrapper[5099]: I0121 19:04:18.068032 5099 generic.go:358] "Generic (PLEG): container finished" podID="49fa2683-7f9f-4152-8cc3-238620fa6630" containerID="30e3866f50fe7003f8f216bfa418887d6695b99a491386009871a7131210902c" exitCode=0 Jan 21 19:04:18 crc kubenswrapper[5099]: I0121 19:04:18.068118 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhw7m" event={"ID":"49fa2683-7f9f-4152-8cc3-238620fa6630","Type":"ContainerDied","Data":"30e3866f50fe7003f8f216bfa418887d6695b99a491386009871a7131210902c"} Jan 21 19:04:18 crc kubenswrapper[5099]: I0121 19:04:18.068158 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhw7m" event={"ID":"49fa2683-7f9f-4152-8cc3-238620fa6630","Type":"ContainerStarted","Data":"871c527d349ddf1a8c0739edeab3f685182dc9d19d54c20adec325a755e9b366"} Jan 21 19:04:18 crc kubenswrapper[5099]: I0121 19:04:18.096048 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bn467" podStartSLOduration=3.566264737 podStartE2EDuration="4.096021094s" podCreationTimestamp="2026-01-21 19:04:14 +0000 UTC" firstStartedPulling="2026-01-21 19:04:15.016880726 +0000 UTC m=+3012.430843187" lastFinishedPulling="2026-01-21 19:04:15.546637083 +0000 UTC m=+3012.960599544" observedRunningTime="2026-01-21 19:04:18.094821055 +0000 UTC 
m=+3015.508783536" watchObservedRunningTime="2026-01-21 19:04:18.096021094 +0000 UTC m=+3015.509983565" Jan 21 19:04:20 crc kubenswrapper[5099]: I0121 19:04:20.005590 5099 scope.go:117] "RemoveContainer" containerID="1be30e73a4a59966cd6b182159aeb183a6f9d21f965a66742f7b0b35d580a8a5" Jan 21 19:04:22 crc kubenswrapper[5099]: I0121 19:04:22.065382 5099 patch_prober.go:28] interesting pod/machine-config-daemon-hsl47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 19:04:22 crc kubenswrapper[5099]: I0121 19:04:22.065633 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 19:04:22 crc kubenswrapper[5099]: I0121 19:04:22.065675 5099 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" Jan 21 19:04:22 crc kubenswrapper[5099]: I0121 19:04:22.066307 5099 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ed8f65e4e865d742da54ffb07adeb8093cf746fe68795dee140a4dd38321c6df"} pod="openshift-machine-config-operator/machine-config-daemon-hsl47" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 19:04:22 crc kubenswrapper[5099]: I0121 19:04:22.066360 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" containerID="cri-o://ed8f65e4e865d742da54ffb07adeb8093cf746fe68795dee140a4dd38321c6df" gracePeriod=600 Jan 21 19:04:22 crc kubenswrapper[5099]: I0121 19:04:22.167432 5099 generic.go:358] "Generic (PLEG): container finished" podID="49fa2683-7f9f-4152-8cc3-238620fa6630" containerID="8f1a9fb87a5d86e22310a7d3808cdabfac47dde43484e470daf8b5a3807394cb" exitCode=0 Jan 21 19:04:22 crc kubenswrapper[5099]: I0121 19:04:22.167687 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhw7m" event={"ID":"49fa2683-7f9f-4152-8cc3-238620fa6630","Type":"ContainerDied","Data":"8f1a9fb87a5d86e22310a7d3808cdabfac47dde43484e470daf8b5a3807394cb"} Jan 21 19:04:22 crc kubenswrapper[5099]: E0121 19:04:22.285092 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 19:04:23 crc kubenswrapper[5099]: I0121 19:04:23.177699 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhw7m" event={"ID":"49fa2683-7f9f-4152-8cc3-238620fa6630","Type":"ContainerStarted","Data":"8501086c58fd6f5d9059ab0d425d2fb6e35fdcdc053185719046d5c4a9f9f531"} Jan 21 19:04:23 crc kubenswrapper[5099]: I0121 19:04:23.180685 5099 generic.go:358] "Generic 
(PLEG): container finished" podID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerID="ed8f65e4e865d742da54ffb07adeb8093cf746fe68795dee140a4dd38321c6df" exitCode=0 Jan 21 19:04:23 crc kubenswrapper[5099]: I0121 19:04:23.181007 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" event={"ID":"b19b831f-eaf0-4c77-859b-84eb9a5f233c","Type":"ContainerDied","Data":"ed8f65e4e865d742da54ffb07adeb8093cf746fe68795dee140a4dd38321c6df"} Jan 21 19:04:23 crc kubenswrapper[5099]: I0121 19:04:23.181037 5099 scope.go:117] "RemoveContainer" containerID="19f284b3397f38ecead8f041287c5ab09dae33e60d991a25139de1f67cebf1aa" Jan 21 19:04:23 crc kubenswrapper[5099]: I0121 19:04:23.181359 5099 scope.go:117] "RemoveContainer" containerID="ed8f65e4e865d742da54ffb07adeb8093cf746fe68795dee140a4dd38321c6df" Jan 21 19:04:23 crc kubenswrapper[5099]: E0121 19:04:23.181560 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 19:04:23 crc kubenswrapper[5099]: I0121 19:04:23.207817 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mhw7m" podStartSLOduration=4.004340442 podStartE2EDuration="7.207796214s" podCreationTimestamp="2026-01-21 19:04:16 +0000 UTC" firstStartedPulling="2026-01-21 19:04:18.073177067 +0000 UTC m=+3015.487139568" lastFinishedPulling="2026-01-21 19:04:21.276632879 +0000 UTC m=+3018.690595340" observedRunningTime="2026-01-21 19:04:23.20230242 +0000 UTC m=+3020.616264901" watchObservedRunningTime="2026-01-21 19:04:23.207796214 +0000 UTC m=+3020.621758675" Jan 21 19:04:24 crc kubenswrapper[5099]: I0121 19:04:24.409609 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-bn467" Jan 21 19:04:24 crc kubenswrapper[5099]: I0121 19:04:24.410079 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bn467" Jan 21 19:04:24 crc kubenswrapper[5099]: I0121 19:04:24.461322 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bn467" Jan 21 19:04:25 crc kubenswrapper[5099]: I0121 19:04:25.276756 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bn467" Jan 21 19:04:25 crc kubenswrapper[5099]: I0121 19:04:25.476006 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bn467"] Jan 21 19:04:26 crc kubenswrapper[5099]: I0121 19:04:26.793862 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-mhw7m" Jan 21 19:04:26 crc kubenswrapper[5099]: I0121 19:04:26.794280 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mhw7m" Jan 21 19:04:26 crc kubenswrapper[5099]: I0121 19:04:26.849403 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mhw7m" Jan 21 19:04:27 crc kubenswrapper[5099]: I0121 
19:04:27.220102 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bn467" podUID="af27e640-6535-4b7f-a1c7-a7315332d7de" containerName="registry-server" containerID="cri-o://b55a37c50942c735154ba0d383d3c5843a0691aab1485a8247b9f694113a28fb" gracePeriod=2 Jan 21 19:04:27 crc kubenswrapper[5099]: I0121 19:04:27.267519 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mhw7m" Jan 21 19:04:27 crc kubenswrapper[5099]: I0121 19:04:27.692132 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mhw7m"] Jan 21 19:04:27 crc kubenswrapper[5099]: I0121 19:04:27.872695 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f58tc"] Jan 21 19:04:27 crc kubenswrapper[5099]: I0121 19:04:27.873057 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-f58tc" podUID="4c1f0429-8f30-4646-aa1b-9913eb49ebfe" containerName="registry-server" containerID="cri-o://1c00e101619cb71fb86c534cc2b6150961cf99d0c4ad9d4317f21c10d755e903" gracePeriod=2 Jan 21 19:04:28 crc kubenswrapper[5099]: I0121 19:04:28.256629 5099 generic.go:358] "Generic (PLEG): container finished" podID="af27e640-6535-4b7f-a1c7-a7315332d7de" containerID="b55a37c50942c735154ba0d383d3c5843a0691aab1485a8247b9f694113a28fb" exitCode=0 Jan 21 19:04:28 crc kubenswrapper[5099]: I0121 19:04:28.256857 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bn467" event={"ID":"af27e640-6535-4b7f-a1c7-a7315332d7de","Type":"ContainerDied","Data":"b55a37c50942c735154ba0d383d3c5843a0691aab1485a8247b9f694113a28fb"} Jan 21 19:04:28 crc kubenswrapper[5099]: I0121 19:04:28.256892 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bn467" event={"ID":"af27e640-6535-4b7f-a1c7-a7315332d7de","Type":"ContainerDied","Data":"4a28b6d303d9b5c6a6aca16f27e64b0f23a6bf61d6190f533ef8f7ea1713807e"} Jan 21 19:04:28 crc kubenswrapper[5099]: I0121 19:04:28.256904 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a28b6d303d9b5c6a6aca16f27e64b0f23a6bf61d6190f533ef8f7ea1713807e" Jan 21 19:04:28 crc kubenswrapper[5099]: I0121 19:04:28.257107 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bn467" Jan 21 19:04:28 crc kubenswrapper[5099]: I0121 19:04:28.280065 5099 generic.go:358] "Generic (PLEG): container finished" podID="4c1f0429-8f30-4646-aa1b-9913eb49ebfe" containerID="1c00e101619cb71fb86c534cc2b6150961cf99d0c4ad9d4317f21c10d755e903" exitCode=0 Jan 21 19:04:28 crc kubenswrapper[5099]: I0121 19:04:28.280176 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f58tc" event={"ID":"4c1f0429-8f30-4646-aa1b-9913eb49ebfe","Type":"ContainerDied","Data":"1c00e101619cb71fb86c534cc2b6150961cf99d0c4ad9d4317f21c10d755e903"} Jan 21 19:04:28 crc kubenswrapper[5099]: I0121 19:04:28.359393 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f58tc" Jan 21 19:04:28 crc kubenswrapper[5099]: I0121 19:04:28.379761 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af27e640-6535-4b7f-a1c7-a7315332d7de-utilities\") pod \"af27e640-6535-4b7f-a1c7-a7315332d7de\" (UID: \"af27e640-6535-4b7f-a1c7-a7315332d7de\") " Jan 21 19:04:28 crc kubenswrapper[5099]: I0121 19:04:28.379804 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af27e640-6535-4b7f-a1c7-a7315332d7de-catalog-content\") pod \"af27e640-6535-4b7f-a1c7-a7315332d7de\" (UID: \"af27e640-6535-4b7f-a1c7-a7315332d7de\") " Jan 21 19:04:28 crc kubenswrapper[5099]: I0121 19:04:28.380041 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vb5p\" (UniqueName: \"kubernetes.io/projected/af27e640-6535-4b7f-a1c7-a7315332d7de-kube-api-access-9vb5p\") pod \"af27e640-6535-4b7f-a1c7-a7315332d7de\" (UID: \"af27e640-6535-4b7f-a1c7-a7315332d7de\") " Jan 21 19:04:28 crc kubenswrapper[5099]: I0121 19:04:28.381772 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af27e640-6535-4b7f-a1c7-a7315332d7de-utilities" (OuterVolumeSpecName: "utilities") pod "af27e640-6535-4b7f-a1c7-a7315332d7de" (UID: "af27e640-6535-4b7f-a1c7-a7315332d7de"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 19:04:28 crc kubenswrapper[5099]: I0121 19:04:28.395512 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af27e640-6535-4b7f-a1c7-a7315332d7de-kube-api-access-9vb5p" (OuterVolumeSpecName: "kube-api-access-9vb5p") pod "af27e640-6535-4b7f-a1c7-a7315332d7de" (UID: "af27e640-6535-4b7f-a1c7-a7315332d7de"). InnerVolumeSpecName "kube-api-access-9vb5p". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 19:04:28 crc kubenswrapper[5099]: I0121 19:04:28.450341 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af27e640-6535-4b7f-a1c7-a7315332d7de-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "af27e640-6535-4b7f-a1c7-a7315332d7de" (UID: "af27e640-6535-4b7f-a1c7-a7315332d7de"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 19:04:28 crc kubenswrapper[5099]: I0121 19:04:28.481203 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c1f0429-8f30-4646-aa1b-9913eb49ebfe-utilities\") pod \"4c1f0429-8f30-4646-aa1b-9913eb49ebfe\" (UID: \"4c1f0429-8f30-4646-aa1b-9913eb49ebfe\") " Jan 21 19:04:28 crc kubenswrapper[5099]: I0121 19:04:28.481355 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c695x\" (UniqueName: \"kubernetes.io/projected/4c1f0429-8f30-4646-aa1b-9913eb49ebfe-kube-api-access-c695x\") pod \"4c1f0429-8f30-4646-aa1b-9913eb49ebfe\" (UID: \"4c1f0429-8f30-4646-aa1b-9913eb49ebfe\") " Jan 21 19:04:28 crc kubenswrapper[5099]: I0121 19:04:28.481583 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c1f0429-8f30-4646-aa1b-9913eb49ebfe-catalog-content\") pod \"4c1f0429-8f30-4646-aa1b-9913eb49ebfe\" (UID: \"4c1f0429-8f30-4646-aa1b-9913eb49ebfe\") " Jan 21 19:04:28 crc kubenswrapper[5099]: I0121 19:04:28.482798 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c1f0429-8f30-4646-aa1b-9913eb49ebfe-utilities" (OuterVolumeSpecName: "utilities") pod "4c1f0429-8f30-4646-aa1b-9913eb49ebfe" (UID: "4c1f0429-8f30-4646-aa1b-9913eb49ebfe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 19:04:28 crc kubenswrapper[5099]: I0121 19:04:28.483153 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vb5p\" (UniqueName: \"kubernetes.io/projected/af27e640-6535-4b7f-a1c7-a7315332d7de-kube-api-access-9vb5p\") on node \"crc\" DevicePath \"\"" Jan 21 19:04:28 crc kubenswrapper[5099]: I0121 19:04:28.483178 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af27e640-6535-4b7f-a1c7-a7315332d7de-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 19:04:28 crc kubenswrapper[5099]: I0121 19:04:28.483191 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af27e640-6535-4b7f-a1c7-a7315332d7de-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 19:04:28 crc kubenswrapper[5099]: I0121 19:04:28.483202 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c1f0429-8f30-4646-aa1b-9913eb49ebfe-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 19:04:28 crc kubenswrapper[5099]: I0121 19:04:28.487061 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c1f0429-8f30-4646-aa1b-9913eb49ebfe-kube-api-access-c695x" (OuterVolumeSpecName: "kube-api-access-c695x") pod "4c1f0429-8f30-4646-aa1b-9913eb49ebfe" (UID: "4c1f0429-8f30-4646-aa1b-9913eb49ebfe"). InnerVolumeSpecName "kube-api-access-c695x". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 19:04:28 crc kubenswrapper[5099]: I0121 19:04:28.517621 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c1f0429-8f30-4646-aa1b-9913eb49ebfe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4c1f0429-8f30-4646-aa1b-9913eb49ebfe" (UID: "4c1f0429-8f30-4646-aa1b-9913eb49ebfe"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 19:04:28 crc kubenswrapper[5099]: I0121 19:04:28.586313 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c1f0429-8f30-4646-aa1b-9913eb49ebfe-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 19:04:28 crc kubenswrapper[5099]: I0121 19:04:28.586641 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-c695x\" (UniqueName: \"kubernetes.io/projected/4c1f0429-8f30-4646-aa1b-9913eb49ebfe-kube-api-access-c695x\") on node \"crc\" DevicePath \"\"" Jan 21 19:04:29 crc kubenswrapper[5099]: I0121 19:04:29.293234 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f58tc" event={"ID":"4c1f0429-8f30-4646-aa1b-9913eb49ebfe","Type":"ContainerDied","Data":"22f0ac5d4ad56994430cc2ae3c9b5a4e789c92e34f04bf75a51aef9c187b7828"} Jan 21 19:04:29 crc kubenswrapper[5099]: I0121 19:04:29.293336 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f58tc" Jan 21 19:04:29 crc kubenswrapper[5099]: I0121 19:04:29.293387 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bn467" Jan 21 19:04:29 crc kubenswrapper[5099]: I0121 19:04:29.293355 5099 scope.go:117] "RemoveContainer" containerID="1c00e101619cb71fb86c534cc2b6150961cf99d0c4ad9d4317f21c10d755e903" Jan 21 19:04:29 crc kubenswrapper[5099]: I0121 19:04:29.325591 5099 scope.go:117] "RemoveContainer" containerID="974f4587151a697d5b26618ab49688b671725068142ab7184093b4e9050a0499" Jan 21 19:04:29 crc kubenswrapper[5099]: I0121 19:04:29.342768 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f58tc"] Jan 21 19:04:29 crc kubenswrapper[5099]: I0121 19:04:29.357851 5099 scope.go:117] "RemoveContainer" containerID="ebec7c7c6354aea243b57bacff79f3de789e8b324329866809660649d922e444" Jan 21 19:04:29 crc kubenswrapper[5099]: I0121 19:04:29.360338 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-f58tc"] Jan 21 19:04:29 crc kubenswrapper[5099]: I0121 19:04:29.372877 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bn467"] Jan 21 19:04:29 crc kubenswrapper[5099]: I0121 19:04:29.381980 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bn467"] Jan 21 19:04:29 crc kubenswrapper[5099]: I0121 19:04:29.922550 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c1f0429-8f30-4646-aa1b-9913eb49ebfe" path="/var/lib/kubelet/pods/4c1f0429-8f30-4646-aa1b-9913eb49ebfe/volumes" Jan 21 19:04:29 crc kubenswrapper[5099]: I0121 19:04:29.923623 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af27e640-6535-4b7f-a1c7-a7315332d7de" path="/var/lib/kubelet/pods/af27e640-6535-4b7f-a1c7-a7315332d7de/volumes" Jan 21 19:04:37 crc kubenswrapper[5099]: I0121 19:04:37.914493 5099 scope.go:117] "RemoveContainer" containerID="ed8f65e4e865d742da54ffb07adeb8093cf746fe68795dee140a4dd38321c6df" Jan 21 19:04:37 crc kubenswrapper[5099]: E0121 19:04:37.915579 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 19:04:50 crc kubenswrapper[5099]: I0121 19:04:50.914540 5099 scope.go:117] "RemoveContainer" containerID="ed8f65e4e865d742da54ffb07adeb8093cf746fe68795dee140a4dd38321c6df" Jan 21 19:04:50 crc kubenswrapper[5099]: E0121 19:04:50.916131 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 19:05:04 crc kubenswrapper[5099]: I0121 19:05:04.913941 5099 scope.go:117] "RemoveContainer" containerID="ed8f65e4e865d742da54ffb07adeb8093cf746fe68795dee140a4dd38321c6df" Jan 21 19:05:04 crc kubenswrapper[5099]: E0121 19:05:04.915504 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 19:05:19 crc kubenswrapper[5099]: I0121 19:05:19.914802 5099 scope.go:117] "RemoveContainer" containerID="ed8f65e4e865d742da54ffb07adeb8093cf746fe68795dee140a4dd38321c6df" Jan 21 19:05:19 crc kubenswrapper[5099]: E0121 19:05:19.916142 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 19:05:34 crc kubenswrapper[5099]: I0121 19:05:34.915646 5099 scope.go:117] "RemoveContainer" containerID="ed8f65e4e865d742da54ffb07adeb8093cf746fe68795dee140a4dd38321c6df" Jan 21 19:05:34 crc kubenswrapper[5099]: E0121 19:05:34.916783 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 19:05:45 crc kubenswrapper[5099]: I0121 19:05:45.916873 5099 scope.go:117] "RemoveContainer" containerID="ed8f65e4e865d742da54ffb07adeb8093cf746fe68795dee140a4dd38321c6df" Jan 21 19:05:45 crc kubenswrapper[5099]: E0121 19:05:45.917640 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" 
podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 19:05:57 crc kubenswrapper[5099]: I0121 19:05:57.913969 5099 scope.go:117] "RemoveContainer" containerID="ed8f65e4e865d742da54ffb07adeb8093cf746fe68795dee140a4dd38321c6df" Jan 21 19:05:57 crc kubenswrapper[5099]: E0121 19:05:57.917641 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 19:06:00 crc kubenswrapper[5099]: I0121 19:06:00.154769 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483706-fv5sl"] Jan 21 19:06:00 crc kubenswrapper[5099]: I0121 19:06:00.156198 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="af27e640-6535-4b7f-a1c7-a7315332d7de" containerName="extract-utilities" Jan 21 19:06:00 crc kubenswrapper[5099]: I0121 19:06:00.156222 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="af27e640-6535-4b7f-a1c7-a7315332d7de" containerName="extract-utilities" Jan 21 19:06:00 crc kubenswrapper[5099]: I0121 19:06:00.156245 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="af27e640-6535-4b7f-a1c7-a7315332d7de" containerName="registry-server" Jan 21 19:06:00 crc kubenswrapper[5099]: I0121 19:06:00.156252 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="af27e640-6535-4b7f-a1c7-a7315332d7de" containerName="registry-server" Jan 21 19:06:00 crc kubenswrapper[5099]: I0121 19:06:00.156264 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4c1f0429-8f30-4646-aa1b-9913eb49ebfe" containerName="extract-utilities" Jan 21 19:06:00 crc kubenswrapper[5099]: I0121 19:06:00.156271 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c1f0429-8f30-4646-aa1b-9913eb49ebfe" containerName="extract-utilities" Jan 21 19:06:00 crc kubenswrapper[5099]: I0121 19:06:00.156277 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="af27e640-6535-4b7f-a1c7-a7315332d7de" containerName="extract-content" Jan 21 19:06:00 crc kubenswrapper[5099]: I0121 19:06:00.156282 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="af27e640-6535-4b7f-a1c7-a7315332d7de" containerName="extract-content" Jan 21 19:06:00 crc kubenswrapper[5099]: I0121 19:06:00.156291 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4c1f0429-8f30-4646-aa1b-9913eb49ebfe" containerName="extract-content" Jan 21 19:06:00 crc kubenswrapper[5099]: I0121 19:06:00.156297 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c1f0429-8f30-4646-aa1b-9913eb49ebfe" containerName="extract-content" Jan 21 19:06:00 crc kubenswrapper[5099]: I0121 19:06:00.156326 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4c1f0429-8f30-4646-aa1b-9913eb49ebfe" containerName="registry-server" Jan 21 19:06:00 crc kubenswrapper[5099]: I0121 19:06:00.156332 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c1f0429-8f30-4646-aa1b-9913eb49ebfe" containerName="registry-server" Jan 21 19:06:00 crc kubenswrapper[5099]: I0121 19:06:00.156466 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="4c1f0429-8f30-4646-aa1b-9913eb49ebfe" 
containerName="registry-server" Jan 21 19:06:00 crc kubenswrapper[5099]: I0121 19:06:00.156479 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="af27e640-6535-4b7f-a1c7-a7315332d7de" containerName="registry-server" Jan 21 19:06:00 crc kubenswrapper[5099]: I0121 19:06:00.161365 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483706-fv5sl" Jan 21 19:06:00 crc kubenswrapper[5099]: I0121 19:06:00.164511 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 19:06:00 crc kubenswrapper[5099]: I0121 19:06:00.165430 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 19:06:00 crc kubenswrapper[5099]: I0121 19:06:00.165725 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-z79sf\"" Jan 21 19:06:00 crc kubenswrapper[5099]: I0121 19:06:00.171373 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483706-fv5sl"] Jan 21 19:06:00 crc kubenswrapper[5099]: I0121 19:06:00.202079 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw69c\" (UniqueName: \"kubernetes.io/projected/b3dc683e-1060-4326-bff6-e844746b43ac-kube-api-access-kw69c\") pod \"auto-csr-approver-29483706-fv5sl\" (UID: \"b3dc683e-1060-4326-bff6-e844746b43ac\") " pod="openshift-infra/auto-csr-approver-29483706-fv5sl" Jan 21 19:06:00 crc kubenswrapper[5099]: I0121 19:06:00.308425 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kw69c\" (UniqueName: \"kubernetes.io/projected/b3dc683e-1060-4326-bff6-e844746b43ac-kube-api-access-kw69c\") pod \"auto-csr-approver-29483706-fv5sl\" (UID: \"b3dc683e-1060-4326-bff6-e844746b43ac\") " pod="openshift-infra/auto-csr-approver-29483706-fv5sl" Jan 21 19:06:00 crc kubenswrapper[5099]: I0121 19:06:00.335774 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw69c\" (UniqueName: \"kubernetes.io/projected/b3dc683e-1060-4326-bff6-e844746b43ac-kube-api-access-kw69c\") pod \"auto-csr-approver-29483706-fv5sl\" (UID: \"b3dc683e-1060-4326-bff6-e844746b43ac\") " pod="openshift-infra/auto-csr-approver-29483706-fv5sl" Jan 21 19:06:00 crc kubenswrapper[5099]: I0121 19:06:00.483030 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483706-fv5sl" Jan 21 19:06:00 crc kubenswrapper[5099]: I0121 19:06:00.762832 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483706-fv5sl"] Jan 21 19:06:00 crc kubenswrapper[5099]: I0121 19:06:00.774076 5099 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 19:06:01 crc kubenswrapper[5099]: I0121 19:06:01.209630 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483706-fv5sl" event={"ID":"b3dc683e-1060-4326-bff6-e844746b43ac","Type":"ContainerStarted","Data":"d93b0995370992eab3b12237547df17859d89ab70e3ec6a92aaf884fea558ca7"} Jan 21 19:06:03 crc kubenswrapper[5099]: I0121 19:06:03.230990 5099 generic.go:358] "Generic (PLEG): container finished" podID="b3dc683e-1060-4326-bff6-e844746b43ac" containerID="f860b53fe2f78c3ed8e1470bf778648f54bb240867464ff2630e9b29356d96bb" exitCode=0 Jan 21 19:06:03 crc kubenswrapper[5099]: I0121 19:06:03.231264 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483706-fv5sl" event={"ID":"b3dc683e-1060-4326-bff6-e844746b43ac","Type":"ContainerDied","Data":"f860b53fe2f78c3ed8e1470bf778648f54bb240867464ff2630e9b29356d96bb"} Jan 21 19:06:04 crc kubenswrapper[5099]: I0121 19:06:04.470661 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483706-fv5sl" Jan 21 19:06:04 crc kubenswrapper[5099]: I0121 19:06:04.599975 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kw69c\" (UniqueName: \"kubernetes.io/projected/b3dc683e-1060-4326-bff6-e844746b43ac-kube-api-access-kw69c\") pod \"b3dc683e-1060-4326-bff6-e844746b43ac\" (UID: \"b3dc683e-1060-4326-bff6-e844746b43ac\") " Jan 21 19:06:04 crc kubenswrapper[5099]: I0121 19:06:04.607606 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3dc683e-1060-4326-bff6-e844746b43ac-kube-api-access-kw69c" (OuterVolumeSpecName: "kube-api-access-kw69c") pod "b3dc683e-1060-4326-bff6-e844746b43ac" (UID: "b3dc683e-1060-4326-bff6-e844746b43ac"). InnerVolumeSpecName "kube-api-access-kw69c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 19:06:04 crc kubenswrapper[5099]: I0121 19:06:04.702554 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kw69c\" (UniqueName: \"kubernetes.io/projected/b3dc683e-1060-4326-bff6-e844746b43ac-kube-api-access-kw69c\") on node \"crc\" DevicePath \"\"" Jan 21 19:06:05 crc kubenswrapper[5099]: I0121 19:06:05.252948 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483706-fv5sl" event={"ID":"b3dc683e-1060-4326-bff6-e844746b43ac","Type":"ContainerDied","Data":"d93b0995370992eab3b12237547df17859d89ab70e3ec6a92aaf884fea558ca7"} Jan 21 19:06:05 crc kubenswrapper[5099]: I0121 19:06:05.253027 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d93b0995370992eab3b12237547df17859d89ab70e3ec6a92aaf884fea558ca7" Jan 21 19:06:05 crc kubenswrapper[5099]: I0121 19:06:05.253165 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483706-fv5sl" Jan 21 19:06:05 crc kubenswrapper[5099]: I0121 19:06:05.592742 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483700-rhmqg"] Jan 21 19:06:05 crc kubenswrapper[5099]: I0121 19:06:05.598110 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483700-rhmqg"] Jan 21 19:06:05 crc kubenswrapper[5099]: I0121 19:06:05.932295 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f6c6d06-df0e-475a-8008-8338129bc609" path="/var/lib/kubelet/pods/9f6c6d06-df0e-475a-8008-8338129bc609/volumes" Jan 21 19:06:09 crc kubenswrapper[5099]: I0121 19:06:09.914209 5099 scope.go:117] "RemoveContainer" containerID="ed8f65e4e865d742da54ffb07adeb8093cf746fe68795dee140a4dd38321c6df" Jan 21 19:06:09 crc kubenswrapper[5099]: E0121 19:06:09.915122 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 19:06:21 crc kubenswrapper[5099]: I0121 19:06:21.314448 5099 scope.go:117] "RemoveContainer" containerID="8097979396b4c5be40c7bc738dc73ec47ce4542381d833c20a8b770a7ea91d7e" Jan 21 19:06:21 crc kubenswrapper[5099]: I0121 19:06:21.917267 5099 scope.go:117] "RemoveContainer" containerID="ed8f65e4e865d742da54ffb07adeb8093cf746fe68795dee140a4dd38321c6df" Jan 21 19:06:21 crc kubenswrapper[5099]: E0121 19:06:21.918068 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 19:06:32 crc kubenswrapper[5099]: I0121 19:06:32.914598 5099 scope.go:117] "RemoveContainer" containerID="ed8f65e4e865d742da54ffb07adeb8093cf746fe68795dee140a4dd38321c6df" Jan 21 19:06:32 crc kubenswrapper[5099]: E0121 19:06:32.917921 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 19:06:43 crc kubenswrapper[5099]: I0121 19:06:43.932905 5099 scope.go:117] "RemoveContainer" containerID="ed8f65e4e865d742da54ffb07adeb8093cf746fe68795dee140a4dd38321c6df" Jan 21 19:06:43 crc kubenswrapper[5099]: E0121 19:06:43.934410 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 
19:06:58 crc kubenswrapper[5099]: I0121 19:06:58.914705 5099 scope.go:117] "RemoveContainer" containerID="ed8f65e4e865d742da54ffb07adeb8093cf746fe68795dee140a4dd38321c6df" Jan 21 19:06:58 crc kubenswrapper[5099]: E0121 19:06:58.916332 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 19:07:05 crc kubenswrapper[5099]: I0121 19:07:05.564415 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zmv44"] Jan 21 19:07:05 crc kubenswrapper[5099]: I0121 19:07:05.568385 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b3dc683e-1060-4326-bff6-e844746b43ac" containerName="oc" Jan 21 19:07:05 crc kubenswrapper[5099]: I0121 19:07:05.568434 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3dc683e-1060-4326-bff6-e844746b43ac" containerName="oc" Jan 21 19:07:05 crc kubenswrapper[5099]: I0121 19:07:05.569303 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="b3dc683e-1060-4326-bff6-e844746b43ac" containerName="oc" Jan 21 19:07:05 crc kubenswrapper[5099]: I0121 19:07:05.581222 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zmv44" Jan 21 19:07:05 crc kubenswrapper[5099]: I0121 19:07:05.588671 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zmv44"] Jan 21 19:07:05 crc kubenswrapper[5099]: I0121 19:07:05.630625 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f21414c3-b33c-4eaf-badc-f3c6e52b7ab1-catalog-content\") pod \"redhat-operators-zmv44\" (UID: \"f21414c3-b33c-4eaf-badc-f3c6e52b7ab1\") " pod="openshift-marketplace/redhat-operators-zmv44" Jan 21 19:07:05 crc kubenswrapper[5099]: I0121 19:07:05.630960 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f21414c3-b33c-4eaf-badc-f3c6e52b7ab1-utilities\") pod \"redhat-operators-zmv44\" (UID: \"f21414c3-b33c-4eaf-badc-f3c6e52b7ab1\") " pod="openshift-marketplace/redhat-operators-zmv44" Jan 21 19:07:05 crc kubenswrapper[5099]: I0121 19:07:05.631124 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnxkr\" (UniqueName: \"kubernetes.io/projected/f21414c3-b33c-4eaf-badc-f3c6e52b7ab1-kube-api-access-hnxkr\") pod \"redhat-operators-zmv44\" (UID: \"f21414c3-b33c-4eaf-badc-f3c6e52b7ab1\") " pod="openshift-marketplace/redhat-operators-zmv44" Jan 21 19:07:05 crc kubenswrapper[5099]: I0121 19:07:05.733639 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f21414c3-b33c-4eaf-badc-f3c6e52b7ab1-utilities\") pod \"redhat-operators-zmv44\" (UID: \"f21414c3-b33c-4eaf-badc-f3c6e52b7ab1\") " pod="openshift-marketplace/redhat-operators-zmv44" Jan 21 19:07:05 crc kubenswrapper[5099]: I0121 19:07:05.733744 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hnxkr\" 
(UniqueName: \"kubernetes.io/projected/f21414c3-b33c-4eaf-badc-f3c6e52b7ab1-kube-api-access-hnxkr\") pod \"redhat-operators-zmv44\" (UID: \"f21414c3-b33c-4eaf-badc-f3c6e52b7ab1\") " pod="openshift-marketplace/redhat-operators-zmv44" Jan 21 19:07:05 crc kubenswrapper[5099]: I0121 19:07:05.733810 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f21414c3-b33c-4eaf-badc-f3c6e52b7ab1-catalog-content\") pod \"redhat-operators-zmv44\" (UID: \"f21414c3-b33c-4eaf-badc-f3c6e52b7ab1\") " pod="openshift-marketplace/redhat-operators-zmv44" Jan 21 19:07:05 crc kubenswrapper[5099]: I0121 19:07:05.734400 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f21414c3-b33c-4eaf-badc-f3c6e52b7ab1-utilities\") pod \"redhat-operators-zmv44\" (UID: \"f21414c3-b33c-4eaf-badc-f3c6e52b7ab1\") " pod="openshift-marketplace/redhat-operators-zmv44" Jan 21 19:07:05 crc kubenswrapper[5099]: I0121 19:07:05.734542 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f21414c3-b33c-4eaf-badc-f3c6e52b7ab1-catalog-content\") pod \"redhat-operators-zmv44\" (UID: \"f21414c3-b33c-4eaf-badc-f3c6e52b7ab1\") " pod="openshift-marketplace/redhat-operators-zmv44" Jan 21 19:07:05 crc kubenswrapper[5099]: I0121 19:07:05.766413 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnxkr\" (UniqueName: \"kubernetes.io/projected/f21414c3-b33c-4eaf-badc-f3c6e52b7ab1-kube-api-access-hnxkr\") pod \"redhat-operators-zmv44\" (UID: \"f21414c3-b33c-4eaf-badc-f3c6e52b7ab1\") " pod="openshift-marketplace/redhat-operators-zmv44" Jan 21 19:07:05 crc kubenswrapper[5099]: I0121 19:07:05.946157 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zmv44" Jan 21 19:07:06 crc kubenswrapper[5099]: I0121 19:07:06.227823 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zmv44"] Jan 21 19:07:06 crc kubenswrapper[5099]: I0121 19:07:06.873531 5099 generic.go:358] "Generic (PLEG): container finished" podID="f21414c3-b33c-4eaf-badc-f3c6e52b7ab1" containerID="1c8bda6b003ef9dbc9d5694f9c34b519007c9dfe548155bdd7347a803ba62321" exitCode=0 Jan 21 19:07:06 crc kubenswrapper[5099]: I0121 19:07:06.873608 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zmv44" event={"ID":"f21414c3-b33c-4eaf-badc-f3c6e52b7ab1","Type":"ContainerDied","Data":"1c8bda6b003ef9dbc9d5694f9c34b519007c9dfe548155bdd7347a803ba62321"} Jan 21 19:07:06 crc kubenswrapper[5099]: I0121 19:07:06.874115 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zmv44" event={"ID":"f21414c3-b33c-4eaf-badc-f3c6e52b7ab1","Type":"ContainerStarted","Data":"49a4da1d580e6793a36e5ce97e4300fa1e6673300ee4f93f7025a41915b6dcad"} Jan 21 19:07:07 crc kubenswrapper[5099]: I0121 19:07:07.897700 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zmv44" event={"ID":"f21414c3-b33c-4eaf-badc-f3c6e52b7ab1","Type":"ContainerStarted","Data":"6e4f224ce02122d0d6b3746a13b7697ce2c146150428ddf51ebd34355c80219a"} Jan 21 19:07:08 crc kubenswrapper[5099]: I0121 19:07:08.912933 5099 generic.go:358] "Generic (PLEG): container finished" podID="f21414c3-b33c-4eaf-badc-f3c6e52b7ab1" containerID="6e4f224ce02122d0d6b3746a13b7697ce2c146150428ddf51ebd34355c80219a" exitCode=0 Jan 21 19:07:08 crc kubenswrapper[5099]: I0121 19:07:08.913945 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zmv44" event={"ID":"f21414c3-b33c-4eaf-badc-f3c6e52b7ab1","Type":"ContainerDied","Data":"6e4f224ce02122d0d6b3746a13b7697ce2c146150428ddf51ebd34355c80219a"} Jan 21 19:07:09 crc kubenswrapper[5099]: I0121 19:07:09.932646 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zmv44" event={"ID":"f21414c3-b33c-4eaf-badc-f3c6e52b7ab1","Type":"ContainerStarted","Data":"91d412bc5b7d0bfed806ecf22663c54ea7b46fe1e577ffc60fc11ce56ea5cfab"} Jan 21 19:07:09 crc kubenswrapper[5099]: I0121 19:07:09.963841 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zmv44" podStartSLOduration=4.1967632 podStartE2EDuration="4.963815958s" podCreationTimestamp="2026-01-21 19:07:05 +0000 UTC" firstStartedPulling="2026-01-21 19:07:06.874769908 +0000 UTC m=+3184.288732369" lastFinishedPulling="2026-01-21 19:07:07.641822626 +0000 UTC m=+3185.055785127" observedRunningTime="2026-01-21 19:07:09.956006588 +0000 UTC m=+3187.369969059" watchObservedRunningTime="2026-01-21 19:07:09.963815958 +0000 UTC m=+3187.377778419" Jan 21 19:07:10 crc kubenswrapper[5099]: I0121 19:07:10.914046 5099 scope.go:117] "RemoveContainer" containerID="ed8f65e4e865d742da54ffb07adeb8093cf746fe68795dee140a4dd38321c6df" Jan 21 19:07:10 crc kubenswrapper[5099]: E0121 19:07:10.914401 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 19:07:15 crc kubenswrapper[5099]: I0121 19:07:15.946938 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-zmv44" Jan 21 19:07:15 crc kubenswrapper[5099]: I0121 19:07:15.947585 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zmv44" Jan 21 19:07:16 crc kubenswrapper[5099]: I0121 19:07:16.006119 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zmv44" Jan 21 19:07:17 crc kubenswrapper[5099]: I0121 19:07:17.030938 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zmv44" Jan 21 19:07:17 crc kubenswrapper[5099]: I0121 19:07:17.083467 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zmv44"] Jan 21 19:07:19 crc kubenswrapper[5099]: I0121 19:07:19.014856 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zmv44" podUID="f21414c3-b33c-4eaf-badc-f3c6e52b7ab1" containerName="registry-server" containerID="cri-o://91d412bc5b7d0bfed806ecf22663c54ea7b46fe1e577ffc60fc11ce56ea5cfab" gracePeriod=2 Jan 21 19:07:21 crc kubenswrapper[5099]: I0121 19:07:21.041468 5099 generic.go:358] "Generic (PLEG): container finished" podID="f21414c3-b33c-4eaf-badc-f3c6e52b7ab1" containerID="91d412bc5b7d0bfed806ecf22663c54ea7b46fe1e577ffc60fc11ce56ea5cfab" exitCode=0 Jan 21 19:07:21 crc kubenswrapper[5099]: I0121 19:07:21.041559 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zmv44" event={"ID":"f21414c3-b33c-4eaf-badc-f3c6e52b7ab1","Type":"ContainerDied","Data":"91d412bc5b7d0bfed806ecf22663c54ea7b46fe1e577ffc60fc11ce56ea5cfab"} Jan 21 19:07:21 crc kubenswrapper[5099]: I0121 19:07:21.465717 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zmv44" Jan 21 19:07:21 crc kubenswrapper[5099]: I0121 19:07:21.593888 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f21414c3-b33c-4eaf-badc-f3c6e52b7ab1-utilities\") pod \"f21414c3-b33c-4eaf-badc-f3c6e52b7ab1\" (UID: \"f21414c3-b33c-4eaf-badc-f3c6e52b7ab1\") " Jan 21 19:07:21 crc kubenswrapper[5099]: I0121 19:07:21.594037 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnxkr\" (UniqueName: \"kubernetes.io/projected/f21414c3-b33c-4eaf-badc-f3c6e52b7ab1-kube-api-access-hnxkr\") pod \"f21414c3-b33c-4eaf-badc-f3c6e52b7ab1\" (UID: \"f21414c3-b33c-4eaf-badc-f3c6e52b7ab1\") " Jan 21 19:07:21 crc kubenswrapper[5099]: I0121 19:07:21.594130 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f21414c3-b33c-4eaf-badc-f3c6e52b7ab1-catalog-content\") pod \"f21414c3-b33c-4eaf-badc-f3c6e52b7ab1\" (UID: \"f21414c3-b33c-4eaf-badc-f3c6e52b7ab1\") " Jan 21 19:07:21 crc kubenswrapper[5099]: I0121 19:07:21.596855 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f21414c3-b33c-4eaf-badc-f3c6e52b7ab1-utilities" (OuterVolumeSpecName: "utilities") pod "f21414c3-b33c-4eaf-badc-f3c6e52b7ab1" (UID: "f21414c3-b33c-4eaf-badc-f3c6e52b7ab1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 19:07:21 crc kubenswrapper[5099]: I0121 19:07:21.603250 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f21414c3-b33c-4eaf-badc-f3c6e52b7ab1-kube-api-access-hnxkr" (OuterVolumeSpecName: "kube-api-access-hnxkr") pod "f21414c3-b33c-4eaf-badc-f3c6e52b7ab1" (UID: "f21414c3-b33c-4eaf-badc-f3c6e52b7ab1"). InnerVolumeSpecName "kube-api-access-hnxkr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 19:07:21 crc kubenswrapper[5099]: I0121 19:07:21.696290 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f21414c3-b33c-4eaf-badc-f3c6e52b7ab1-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 19:07:21 crc kubenswrapper[5099]: I0121 19:07:21.696328 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hnxkr\" (UniqueName: \"kubernetes.io/projected/f21414c3-b33c-4eaf-badc-f3c6e52b7ab1-kube-api-access-hnxkr\") on node \"crc\" DevicePath \"\"" Jan 21 19:07:21 crc kubenswrapper[5099]: I0121 19:07:21.721897 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f21414c3-b33c-4eaf-badc-f3c6e52b7ab1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f21414c3-b33c-4eaf-badc-f3c6e52b7ab1" (UID: "f21414c3-b33c-4eaf-badc-f3c6e52b7ab1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 19:07:21 crc kubenswrapper[5099]: I0121 19:07:21.797638 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f21414c3-b33c-4eaf-badc-f3c6e52b7ab1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 19:07:22 crc kubenswrapper[5099]: I0121 19:07:22.063567 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zmv44" event={"ID":"f21414c3-b33c-4eaf-badc-f3c6e52b7ab1","Type":"ContainerDied","Data":"49a4da1d580e6793a36e5ce97e4300fa1e6673300ee4f93f7025a41915b6dcad"} Jan 21 19:07:22 crc kubenswrapper[5099]: I0121 19:07:22.064182 5099 scope.go:117] "RemoveContainer" containerID="91d412bc5b7d0bfed806ecf22663c54ea7b46fe1e577ffc60fc11ce56ea5cfab" Jan 21 19:07:22 crc kubenswrapper[5099]: I0121 19:07:22.064251 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zmv44" Jan 21 19:07:22 crc kubenswrapper[5099]: I0121 19:07:22.105990 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zmv44"] Jan 21 19:07:22 crc kubenswrapper[5099]: I0121 19:07:22.108616 5099 scope.go:117] "RemoveContainer" containerID="6e4f224ce02122d0d6b3746a13b7697ce2c146150428ddf51ebd34355c80219a" Jan 21 19:07:22 crc kubenswrapper[5099]: I0121 19:07:22.114243 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zmv44"] Jan 21 19:07:22 crc kubenswrapper[5099]: I0121 19:07:22.160414 5099 scope.go:117] "RemoveContainer" containerID="1c8bda6b003ef9dbc9d5694f9c34b519007c9dfe548155bdd7347a803ba62321" Jan 21 19:07:23 crc kubenswrapper[5099]: I0121 19:07:23.930976 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f21414c3-b33c-4eaf-badc-f3c6e52b7ab1" path="/var/lib/kubelet/pods/f21414c3-b33c-4eaf-badc-f3c6e52b7ab1/volumes" Jan 21 19:07:24 crc kubenswrapper[5099]: I0121 19:07:24.914873 5099 scope.go:117] "RemoveContainer" containerID="ed8f65e4e865d742da54ffb07adeb8093cf746fe68795dee140a4dd38321c6df" Jan 21 19:07:24 crc kubenswrapper[5099]: E0121 19:07:24.916723 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 19:07:36 crc kubenswrapper[5099]: I0121 19:07:36.914627 5099 scope.go:117] "RemoveContainer" containerID="ed8f65e4e865d742da54ffb07adeb8093cf746fe68795dee140a4dd38321c6df" Jan 21 19:07:36 crc kubenswrapper[5099]: E0121 19:07:36.916585 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 19:07:51 crc kubenswrapper[5099]: I0121 19:07:51.913997 5099 scope.go:117] "RemoveContainer" containerID="ed8f65e4e865d742da54ffb07adeb8093cf746fe68795dee140a4dd38321c6df" Jan 21 19:07:51 crc kubenswrapper[5099]: E0121 19:07:51.915269 
5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c"
Jan 21 19:08:00 crc kubenswrapper[5099]: I0121 19:08:00.149575 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483708-6v7w9"]
Jan 21 19:08:00 crc kubenswrapper[5099]: I0121 19:08:00.151489 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f21414c3-b33c-4eaf-badc-f3c6e52b7ab1" containerName="extract-content"
Jan 21 19:08:00 crc kubenswrapper[5099]: I0121 19:08:00.151544 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="f21414c3-b33c-4eaf-badc-f3c6e52b7ab1" containerName="extract-content"
Jan 21 19:08:00 crc kubenswrapper[5099]: I0121 19:08:00.151577 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f21414c3-b33c-4eaf-badc-f3c6e52b7ab1" containerName="registry-server"
Jan 21 19:08:00 crc kubenswrapper[5099]: I0121 19:08:00.151588 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="f21414c3-b33c-4eaf-badc-f3c6e52b7ab1" containerName="registry-server"
Jan 21 19:08:00 crc kubenswrapper[5099]: I0121 19:08:00.151665 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f21414c3-b33c-4eaf-badc-f3c6e52b7ab1" containerName="extract-utilities"
Jan 21 19:08:00 crc kubenswrapper[5099]: I0121 19:08:00.151677 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="f21414c3-b33c-4eaf-badc-f3c6e52b7ab1" containerName="extract-utilities"
Jan 21 19:08:00 crc kubenswrapper[5099]: I0121 19:08:00.151917 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="f21414c3-b33c-4eaf-badc-f3c6e52b7ab1" containerName="registry-server"
Jan 21 19:08:00 crc kubenswrapper[5099]: I0121 19:08:00.157367 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483708-6v7w9"
Jan 21 19:08:00 crc kubenswrapper[5099]: I0121 19:08:00.160713 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-z79sf\""
Jan 21 19:08:00 crc kubenswrapper[5099]: I0121 19:08:00.161251 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 21 19:08:00 crc kubenswrapper[5099]: I0121 19:08:00.161427 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483708-6v7w9"]
Jan 21 19:08:00 crc kubenswrapper[5099]: I0121 19:08:00.161452 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 21 19:08:00 crc kubenswrapper[5099]: I0121 19:08:00.250005 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dllwp\" (UniqueName: \"kubernetes.io/projected/484fcc86-6678-41da-84fe-640dae7e3798-kube-api-access-dllwp\") pod \"auto-csr-approver-29483708-6v7w9\" (UID: \"484fcc86-6678-41da-84fe-640dae7e3798\") " pod="openshift-infra/auto-csr-approver-29483708-6v7w9"
Jan 21 19:08:00 crc kubenswrapper[5099]: I0121 19:08:00.352080 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dllwp\" (UniqueName: \"kubernetes.io/projected/484fcc86-6678-41da-84fe-640dae7e3798-kube-api-access-dllwp\") pod \"auto-csr-approver-29483708-6v7w9\" (UID: \"484fcc86-6678-41da-84fe-640dae7e3798\") " pod="openshift-infra/auto-csr-approver-29483708-6v7w9"
Jan 21 19:08:00 crc kubenswrapper[5099]: I0121 19:08:00.380062 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dllwp\" (UniqueName: \"kubernetes.io/projected/484fcc86-6678-41da-84fe-640dae7e3798-kube-api-access-dllwp\") pod \"auto-csr-approver-29483708-6v7w9\" (UID: \"484fcc86-6678-41da-84fe-640dae7e3798\") " pod="openshift-infra/auto-csr-approver-29483708-6v7w9"
Jan 21 19:08:00 crc kubenswrapper[5099]: I0121 19:08:00.492667 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483708-6v7w9"
Jan 21 19:08:00 crc kubenswrapper[5099]: I0121 19:08:00.777135 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483708-6v7w9"]
Jan 21 19:08:01 crc kubenswrapper[5099]: I0121 19:08:01.468423 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483708-6v7w9" event={"ID":"484fcc86-6678-41da-84fe-640dae7e3798","Type":"ContainerStarted","Data":"d33c1df09eff4967bc57e6abf4461dd16581f9934ef9b50bac1f259891d6fe15"}
Jan 21 19:08:02 crc kubenswrapper[5099]: I0121 19:08:02.478953 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483708-6v7w9" event={"ID":"484fcc86-6678-41da-84fe-640dae7e3798","Type":"ContainerStarted","Data":"6985428de2f27d84dabec09c6bde17d624c54cf155688ed946ea06971eb5cf69"}
Jan 21 19:08:02 crc kubenswrapper[5099]: I0121 19:08:02.505470 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29483708-6v7w9" podStartSLOduration=1.200365446 podStartE2EDuration="2.50544563s" podCreationTimestamp="2026-01-21 19:08:00 +0000 UTC" firstStartedPulling="2026-01-21 19:08:00.790364372 +0000 UTC m=+3238.204326833" lastFinishedPulling="2026-01-21 19:08:02.095444566 +0000 UTC m=+3239.509407017" observedRunningTime="2026-01-21 19:08:02.497772562 +0000 UTC m=+3239.911735043" watchObservedRunningTime="2026-01-21 19:08:02.50544563 +0000 UTC m=+3239.919408121"
Jan 21 19:08:03 crc kubenswrapper[5099]: I0121 19:08:03.492392 5099 generic.go:358] "Generic (PLEG): container finished" podID="484fcc86-6678-41da-84fe-640dae7e3798" containerID="6985428de2f27d84dabec09c6bde17d624c54cf155688ed946ea06971eb5cf69" exitCode=0
Jan 21 19:08:03 crc kubenswrapper[5099]: I0121 19:08:03.492893 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483708-6v7w9" event={"ID":"484fcc86-6678-41da-84fe-640dae7e3798","Type":"ContainerDied","Data":"6985428de2f27d84dabec09c6bde17d624c54cf155688ed946ea06971eb5cf69"}
Jan 21 19:08:04 crc kubenswrapper[5099]: I0121 19:08:04.766818 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483708-6v7w9"
Jan 21 19:08:04 crc kubenswrapper[5099]: I0121 19:08:04.845491 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dllwp\" (UniqueName: \"kubernetes.io/projected/484fcc86-6678-41da-84fe-640dae7e3798-kube-api-access-dllwp\") pod \"484fcc86-6678-41da-84fe-640dae7e3798\" (UID: \"484fcc86-6678-41da-84fe-640dae7e3798\") "
Jan 21 19:08:04 crc kubenswrapper[5099]: I0121 19:08:04.852432 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/484fcc86-6678-41da-84fe-640dae7e3798-kube-api-access-dllwp" (OuterVolumeSpecName: "kube-api-access-dllwp") pod "484fcc86-6678-41da-84fe-640dae7e3798" (UID: "484fcc86-6678-41da-84fe-640dae7e3798"). InnerVolumeSpecName "kube-api-access-dllwp". PluginName "kubernetes.io/projected", VolumeGIDValue ""
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 19:08:04 crc kubenswrapper[5099]: I0121 19:08:04.914171 5099 scope.go:117] "RemoveContainer" containerID="ed8f65e4e865d742da54ffb07adeb8093cf746fe68795dee140a4dd38321c6df" Jan 21 19:08:04 crc kubenswrapper[5099]: E0121 19:08:04.914364 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 19:08:04 crc kubenswrapper[5099]: I0121 19:08:04.947352 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dllwp\" (UniqueName: \"kubernetes.io/projected/484fcc86-6678-41da-84fe-640dae7e3798-kube-api-access-dllwp\") on node \"crc\" DevicePath \"\"" Jan 21 19:08:05 crc kubenswrapper[5099]: I0121 19:08:05.542018 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483708-6v7w9" event={"ID":"484fcc86-6678-41da-84fe-640dae7e3798","Type":"ContainerDied","Data":"d33c1df09eff4967bc57e6abf4461dd16581f9934ef9b50bac1f259891d6fe15"} Jan 21 19:08:05 crc kubenswrapper[5099]: I0121 19:08:05.542379 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483708-6v7w9" Jan 21 19:08:05 crc kubenswrapper[5099]: I0121 19:08:05.542400 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d33c1df09eff4967bc57e6abf4461dd16581f9934ef9b50bac1f259891d6fe15" Jan 21 19:08:05 crc kubenswrapper[5099]: I0121 19:08:05.602597 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483702-k45db"] Jan 21 19:08:05 crc kubenswrapper[5099]: I0121 19:08:05.617172 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483702-k45db"] Jan 21 19:08:05 crc kubenswrapper[5099]: I0121 19:08:05.927391 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3449fc40-0754-49dc-9823-78990024e365" path="/var/lib/kubelet/pods/3449fc40-0754-49dc-9823-78990024e365/volumes" Jan 21 19:08:19 crc kubenswrapper[5099]: I0121 19:08:19.915587 5099 scope.go:117] "RemoveContainer" containerID="ed8f65e4e865d742da54ffb07adeb8093cf746fe68795dee140a4dd38321c6df" Jan 21 19:08:19 crc kubenswrapper[5099]: E0121 19:08:19.916913 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 19:08:21 crc kubenswrapper[5099]: I0121 19:08:21.516830 5099 scope.go:117] "RemoveContainer" containerID="d302acce9f1d1e3815e34a8e84a347f91a0ac018ab8da3cb42bb75fb508c7ff5" Jan 21 19:08:30 crc kubenswrapper[5099]: I0121 19:08:30.915110 5099 scope.go:117] "RemoveContainer" containerID="ed8f65e4e865d742da54ffb07adeb8093cf746fe68795dee140a4dd38321c6df" Jan 21 19:08:30 crc kubenswrapper[5099]: E0121 19:08:30.916603 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 19:08:42 crc kubenswrapper[5099]: I0121 19:08:42.914106 5099 scope.go:117] "RemoveContainer" containerID="ed8f65e4e865d742da54ffb07adeb8093cf746fe68795dee140a4dd38321c6df" Jan 21 19:08:42 crc kubenswrapper[5099]: E0121 19:08:42.915225 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 19:08:56 crc kubenswrapper[5099]: I0121 19:08:56.913433 5099 scope.go:117] "RemoveContainer" containerID="ed8f65e4e865d742da54ffb07adeb8093cf746fe68795dee140a4dd38321c6df" Jan 21 19:08:56 crc kubenswrapper[5099]: E0121 19:08:56.914180 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 19:09:05 crc kubenswrapper[5099]: I0121 19:09:05.434144 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6pvpm_d9b34413-4767-4d59-b13b-8f882453977a/kube-multus/0.log" Jan 21 19:09:05 crc kubenswrapper[5099]: I0121 19:09:05.438442 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6pvpm_d9b34413-4767-4d59-b13b-8f882453977a/kube-multus/0.log" Jan 21 19:09:05 crc kubenswrapper[5099]: I0121 19:09:05.447630 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 19:09:05 crc kubenswrapper[5099]: I0121 19:09:05.448835 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 19:09:10 crc kubenswrapper[5099]: I0121 19:09:10.914426 5099 scope.go:117] "RemoveContainer" containerID="ed8f65e4e865d742da54ffb07adeb8093cf746fe68795dee140a4dd38321c6df" Jan 21 19:09:10 crc kubenswrapper[5099]: E0121 19:09:10.915381 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 19:09:21 crc kubenswrapper[5099]: I0121 19:09:21.914948 5099 scope.go:117] "RemoveContainer" containerID="ed8f65e4e865d742da54ffb07adeb8093cf746fe68795dee140a4dd38321c6df" Jan 21 19:09:21 crc kubenswrapper[5099]: E0121 19:09:21.916047 5099 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 19:09:34 crc kubenswrapper[5099]: I0121 19:09:34.914878 5099 scope.go:117] "RemoveContainer" containerID="ed8f65e4e865d742da54ffb07adeb8093cf746fe68795dee140a4dd38321c6df" Jan 21 19:09:35 crc kubenswrapper[5099]: I0121 19:09:35.468666 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" event={"ID":"b19b831f-eaf0-4c77-859b-84eb9a5f233c","Type":"ContainerStarted","Data":"81a0a8a971ec28427a2db41f48491f6e97e0f5fb3579db8bfc1a3a42d4581b72"} Jan 21 19:10:00 crc kubenswrapper[5099]: I0121 19:10:00.174825 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483710-p44zt"] Jan 21 19:10:00 crc kubenswrapper[5099]: I0121 19:10:00.177471 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="484fcc86-6678-41da-84fe-640dae7e3798" containerName="oc" Jan 21 19:10:00 crc kubenswrapper[5099]: I0121 19:10:00.177507 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="484fcc86-6678-41da-84fe-640dae7e3798" containerName="oc" Jan 21 19:10:00 crc kubenswrapper[5099]: I0121 19:10:00.177762 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="484fcc86-6678-41da-84fe-640dae7e3798" containerName="oc" Jan 21 19:10:00 crc kubenswrapper[5099]: I0121 19:10:00.189126 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483710-p44zt"] Jan 21 19:10:00 crc kubenswrapper[5099]: I0121 19:10:00.189334 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483710-p44zt" Jan 21 19:10:00 crc kubenswrapper[5099]: I0121 19:10:00.192377 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-z79sf\"" Jan 21 19:10:00 crc kubenswrapper[5099]: I0121 19:10:00.192863 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 19:10:00 crc kubenswrapper[5099]: I0121 19:10:00.193228 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 19:10:00 crc kubenswrapper[5099]: I0121 19:10:00.250614 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqrsg\" (UniqueName: \"kubernetes.io/projected/1a9c5f1e-0a9e-438a-9c43-1cbe2d31e3bb-kube-api-access-jqrsg\") pod \"auto-csr-approver-29483710-p44zt\" (UID: \"1a9c5f1e-0a9e-438a-9c43-1cbe2d31e3bb\") " pod="openshift-infra/auto-csr-approver-29483710-p44zt" Jan 21 19:10:00 crc kubenswrapper[5099]: I0121 19:10:00.353032 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jqrsg\" (UniqueName: \"kubernetes.io/projected/1a9c5f1e-0a9e-438a-9c43-1cbe2d31e3bb-kube-api-access-jqrsg\") pod \"auto-csr-approver-29483710-p44zt\" (UID: \"1a9c5f1e-0a9e-438a-9c43-1cbe2d31e3bb\") " pod="openshift-infra/auto-csr-approver-29483710-p44zt" Jan 21 19:10:00 crc kubenswrapper[5099]: I0121 19:10:00.392830 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqrsg\" (UniqueName: \"kubernetes.io/projected/1a9c5f1e-0a9e-438a-9c43-1cbe2d31e3bb-kube-api-access-jqrsg\") pod \"auto-csr-approver-29483710-p44zt\" (UID: \"1a9c5f1e-0a9e-438a-9c43-1cbe2d31e3bb\") " pod="openshift-infra/auto-csr-approver-29483710-p44zt" Jan 21 19:10:00 crc kubenswrapper[5099]: I0121 19:10:00.536059 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483710-p44zt" Jan 21 19:10:00 crc kubenswrapper[5099]: I0121 19:10:00.775592 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483710-p44zt"] Jan 21 19:10:01 crc kubenswrapper[5099]: I0121 19:10:01.745904 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483710-p44zt" event={"ID":"1a9c5f1e-0a9e-438a-9c43-1cbe2d31e3bb","Type":"ContainerStarted","Data":"7132f0c75e486f25e6a505a4c1fa17dd11465852fbb37b92c36cf3af3c71954c"} Jan 21 19:10:02 crc kubenswrapper[5099]: I0121 19:10:02.757205 5099 generic.go:358] "Generic (PLEG): container finished" podID="1a9c5f1e-0a9e-438a-9c43-1cbe2d31e3bb" containerID="79589012348cf3f9b3af566ef69fef031d67785c1902bed587afc38ac2abad72" exitCode=0 Jan 21 19:10:02 crc kubenswrapper[5099]: I0121 19:10:02.757758 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483710-p44zt" event={"ID":"1a9c5f1e-0a9e-438a-9c43-1cbe2d31e3bb","Type":"ContainerDied","Data":"79589012348cf3f9b3af566ef69fef031d67785c1902bed587afc38ac2abad72"} Jan 21 19:10:04 crc kubenswrapper[5099]: I0121 19:10:04.025151 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483710-p44zt" Jan 21 19:10:04 crc kubenswrapper[5099]: I0121 19:10:04.130949 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqrsg\" (UniqueName: \"kubernetes.io/projected/1a9c5f1e-0a9e-438a-9c43-1cbe2d31e3bb-kube-api-access-jqrsg\") pod \"1a9c5f1e-0a9e-438a-9c43-1cbe2d31e3bb\" (UID: \"1a9c5f1e-0a9e-438a-9c43-1cbe2d31e3bb\") " Jan 21 19:10:04 crc kubenswrapper[5099]: I0121 19:10:04.139049 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a9c5f1e-0a9e-438a-9c43-1cbe2d31e3bb-kube-api-access-jqrsg" (OuterVolumeSpecName: "kube-api-access-jqrsg") pod "1a9c5f1e-0a9e-438a-9c43-1cbe2d31e3bb" (UID: "1a9c5f1e-0a9e-438a-9c43-1cbe2d31e3bb"). InnerVolumeSpecName "kube-api-access-jqrsg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 19:10:04 crc kubenswrapper[5099]: I0121 19:10:04.233367 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jqrsg\" (UniqueName: \"kubernetes.io/projected/1a9c5f1e-0a9e-438a-9c43-1cbe2d31e3bb-kube-api-access-jqrsg\") on node \"crc\" DevicePath \"\"" Jan 21 19:10:04 crc kubenswrapper[5099]: I0121 19:10:04.780634 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483710-p44zt" event={"ID":"1a9c5f1e-0a9e-438a-9c43-1cbe2d31e3bb","Type":"ContainerDied","Data":"7132f0c75e486f25e6a505a4c1fa17dd11465852fbb37b92c36cf3af3c71954c"} Jan 21 19:10:04 crc kubenswrapper[5099]: I0121 19:10:04.780713 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7132f0c75e486f25e6a505a4c1fa17dd11465852fbb37b92c36cf3af3c71954c" Jan 21 19:10:04 crc kubenswrapper[5099]: I0121 19:10:04.781679 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483710-p44zt" Jan 21 19:10:05 crc kubenswrapper[5099]: I0121 19:10:05.110831 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483704-lskp6"] Jan 21 19:10:05 crc kubenswrapper[5099]: I0121 19:10:05.118059 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483704-lskp6"] Jan 21 19:10:05 crc kubenswrapper[5099]: I0121 19:10:05.924178 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15f8150b-1214-4fe4-861a-3b4a9b9bd987" path="/var/lib/kubelet/pods/15f8150b-1214-4fe4-861a-3b4a9b9bd987/volumes" Jan 21 19:10:21 crc kubenswrapper[5099]: I0121 19:10:21.706701 5099 scope.go:117] "RemoveContainer" containerID="8fb05498425b67f7971c29bc04ed755fa045ebc6ec05edfe05bd11895a6499a7" Jan 21 19:10:21 crc kubenswrapper[5099]: I0121 19:10:21.740289 5099 scope.go:117] "RemoveContainer" containerID="cd1b01b1608367a6c2c2c6d2bcc34646c61a1712317893cfc05dbedd13e393a4" Jan 21 19:10:21 crc kubenswrapper[5099]: I0121 19:10:21.871789 5099 scope.go:117] "RemoveContainer" containerID="ed12cc7f78886e5f0ed3e4093be4560685a52def346afb5b681fe28ce8c4d849" Jan 21 19:10:21 crc kubenswrapper[5099]: I0121 19:10:21.923126 5099 scope.go:117] "RemoveContainer" containerID="b55a37c50942c735154ba0d383d3c5843a0691aab1485a8247b9f694113a28fb" Jan 21 19:11:52 crc kubenswrapper[5099]: I0121 19:11:52.064639 5099 patch_prober.go:28] interesting pod/machine-config-daemon-hsl47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 19:11:52 crc kubenswrapper[5099]: I0121 19:11:52.067145 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 19:12:00 crc kubenswrapper[5099]: I0121 19:12:00.146797 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483712-jdxk5"] Jan 21 19:12:00 crc kubenswrapper[5099]: I0121 19:12:00.148423 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1a9c5f1e-0a9e-438a-9c43-1cbe2d31e3bb" containerName="oc" Jan 21 19:12:00 crc kubenswrapper[5099]: I0121 19:12:00.148453 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a9c5f1e-0a9e-438a-9c43-1cbe2d31e3bb" containerName="oc" Jan 21 19:12:00 crc kubenswrapper[5099]: I0121 19:12:00.148663 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="1a9c5f1e-0a9e-438a-9c43-1cbe2d31e3bb" containerName="oc" Jan 21 19:12:00 crc kubenswrapper[5099]: I0121 19:12:00.160314 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483712-jdxk5" Jan 21 19:12:00 crc kubenswrapper[5099]: I0121 19:12:00.163668 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 19:12:00 crc kubenswrapper[5099]: I0121 19:12:00.167669 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 19:12:00 crc kubenswrapper[5099]: I0121 19:12:00.167911 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-z79sf\"" Jan 21 19:12:00 crc kubenswrapper[5099]: I0121 19:12:00.172105 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483712-jdxk5"] Jan 21 19:12:00 crc kubenswrapper[5099]: I0121 19:12:00.273389 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vgkq\" (UniqueName: \"kubernetes.io/projected/1f11078d-06da-4c9b-8d6c-dd45cbb8ff5d-kube-api-access-2vgkq\") pod \"auto-csr-approver-29483712-jdxk5\" (UID: \"1f11078d-06da-4c9b-8d6c-dd45cbb8ff5d\") " pod="openshift-infra/auto-csr-approver-29483712-jdxk5" Jan 21 19:12:00 crc kubenswrapper[5099]: I0121 19:12:00.376078 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2vgkq\" (UniqueName: \"kubernetes.io/projected/1f11078d-06da-4c9b-8d6c-dd45cbb8ff5d-kube-api-access-2vgkq\") pod \"auto-csr-approver-29483712-jdxk5\" (UID: \"1f11078d-06da-4c9b-8d6c-dd45cbb8ff5d\") " pod="openshift-infra/auto-csr-approver-29483712-jdxk5" Jan 21 19:12:00 crc kubenswrapper[5099]: I0121 19:12:00.405136 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vgkq\" (UniqueName: \"kubernetes.io/projected/1f11078d-06da-4c9b-8d6c-dd45cbb8ff5d-kube-api-access-2vgkq\") pod \"auto-csr-approver-29483712-jdxk5\" (UID: \"1f11078d-06da-4c9b-8d6c-dd45cbb8ff5d\") " pod="openshift-infra/auto-csr-approver-29483712-jdxk5" Jan 21 19:12:00 crc kubenswrapper[5099]: I0121 19:12:00.489087 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483712-jdxk5" Jan 21 19:12:00 crc kubenswrapper[5099]: I0121 19:12:00.739826 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483712-jdxk5"] Jan 21 19:12:00 crc kubenswrapper[5099]: I0121 19:12:00.748826 5099 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 19:12:00 crc kubenswrapper[5099]: I0121 19:12:00.975856 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483712-jdxk5" event={"ID":"1f11078d-06da-4c9b-8d6c-dd45cbb8ff5d","Type":"ContainerStarted","Data":"ab43b7a6e599d87eb95f9bd89c2fbaae550303d1a0840252d8cef06d9ad1481a"} Jan 21 19:12:02 crc kubenswrapper[5099]: I0121 19:12:02.998821 5099 generic.go:358] "Generic (PLEG): container finished" podID="1f11078d-06da-4c9b-8d6c-dd45cbb8ff5d" containerID="269f75b7657b9ef8e02147cfa1c57c809778b944f8af68a824132141c80ae300" exitCode=0 Jan 21 19:12:03 crc kubenswrapper[5099]: I0121 19:12:02.998911 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483712-jdxk5" event={"ID":"1f11078d-06da-4c9b-8d6c-dd45cbb8ff5d","Type":"ContainerDied","Data":"269f75b7657b9ef8e02147cfa1c57c809778b944f8af68a824132141c80ae300"} Jan 21 19:12:04 crc kubenswrapper[5099]: I0121 19:12:04.266032 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483712-jdxk5" Jan 21 19:12:04 crc kubenswrapper[5099]: I0121 19:12:04.344347 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vgkq\" (UniqueName: \"kubernetes.io/projected/1f11078d-06da-4c9b-8d6c-dd45cbb8ff5d-kube-api-access-2vgkq\") pod \"1f11078d-06da-4c9b-8d6c-dd45cbb8ff5d\" (UID: \"1f11078d-06da-4c9b-8d6c-dd45cbb8ff5d\") " Jan 21 19:12:04 crc kubenswrapper[5099]: I0121 19:12:04.353034 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f11078d-06da-4c9b-8d6c-dd45cbb8ff5d-kube-api-access-2vgkq" (OuterVolumeSpecName: "kube-api-access-2vgkq") pod "1f11078d-06da-4c9b-8d6c-dd45cbb8ff5d" (UID: "1f11078d-06da-4c9b-8d6c-dd45cbb8ff5d"). InnerVolumeSpecName "kube-api-access-2vgkq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 19:12:04 crc kubenswrapper[5099]: I0121 19:12:04.446195 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2vgkq\" (UniqueName: \"kubernetes.io/projected/1f11078d-06da-4c9b-8d6c-dd45cbb8ff5d-kube-api-access-2vgkq\") on node \"crc\" DevicePath \"\"" Jan 21 19:12:05 crc kubenswrapper[5099]: I0121 19:12:05.018598 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483712-jdxk5" Jan 21 19:12:05 crc kubenswrapper[5099]: I0121 19:12:05.018601 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483712-jdxk5" event={"ID":"1f11078d-06da-4c9b-8d6c-dd45cbb8ff5d","Type":"ContainerDied","Data":"ab43b7a6e599d87eb95f9bd89c2fbaae550303d1a0840252d8cef06d9ad1481a"} Jan 21 19:12:05 crc kubenswrapper[5099]: I0121 19:12:05.018752 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab43b7a6e599d87eb95f9bd89c2fbaae550303d1a0840252d8cef06d9ad1481a" Jan 21 19:12:05 crc kubenswrapper[5099]: I0121 19:12:05.342495 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483706-fv5sl"] Jan 21 19:12:05 crc kubenswrapper[5099]: I0121 19:12:05.348480 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483706-fv5sl"] Jan 21 19:12:05 crc kubenswrapper[5099]: I0121 19:12:05.928372 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3dc683e-1060-4326-bff6-e844746b43ac" path="/var/lib/kubelet/pods/b3dc683e-1060-4326-bff6-e844746b43ac/volumes" Jan 21 19:12:22 crc kubenswrapper[5099]: I0121 19:12:22.016707 5099 scope.go:117] "RemoveContainer" containerID="f860b53fe2f78c3ed8e1470bf778648f54bb240867464ff2630e9b29356d96bb" Jan 21 19:12:22 crc kubenswrapper[5099]: I0121 19:12:22.085034 5099 patch_prober.go:28] interesting pod/machine-config-daemon-hsl47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 19:12:22 crc kubenswrapper[5099]: I0121 19:12:22.085509 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 19:12:52 crc kubenswrapper[5099]: I0121 19:12:52.064945 5099 patch_prober.go:28] interesting pod/machine-config-daemon-hsl47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 19:12:52 crc kubenswrapper[5099]: I0121 19:12:52.065815 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 19:12:52 crc kubenswrapper[5099]: I0121 19:12:52.065938 5099 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" Jan 21 19:12:52 crc kubenswrapper[5099]: I0121 19:12:52.067001 5099 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"81a0a8a971ec28427a2db41f48491f6e97e0f5fb3579db8bfc1a3a42d4581b72"} pod="openshift-machine-config-operator/machine-config-daemon-hsl47" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 19:12:52 crc 
kubenswrapper[5099]: I0121 19:12:52.067085 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" containerID="cri-o://81a0a8a971ec28427a2db41f48491f6e97e0f5fb3579db8bfc1a3a42d4581b72" gracePeriod=600 Jan 21 19:12:52 crc kubenswrapper[5099]: I0121 19:12:52.473985 5099 generic.go:358] "Generic (PLEG): container finished" podID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerID="81a0a8a971ec28427a2db41f48491f6e97e0f5fb3579db8bfc1a3a42d4581b72" exitCode=0 Jan 21 19:12:52 crc kubenswrapper[5099]: I0121 19:12:52.474065 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" event={"ID":"b19b831f-eaf0-4c77-859b-84eb9a5f233c","Type":"ContainerDied","Data":"81a0a8a971ec28427a2db41f48491f6e97e0f5fb3579db8bfc1a3a42d4581b72"} Jan 21 19:12:52 crc kubenswrapper[5099]: I0121 19:12:52.474780 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" event={"ID":"b19b831f-eaf0-4c77-859b-84eb9a5f233c","Type":"ContainerStarted","Data":"902eafa9de1bc3af59cd4e82be49a3162545844e170420f7a5568150cb46f6ab"} Jan 21 19:12:52 crc kubenswrapper[5099]: I0121 19:12:52.474815 5099 scope.go:117] "RemoveContainer" containerID="ed8f65e4e865d742da54ffb07adeb8093cf746fe68795dee140a4dd38321c6df" Jan 21 19:14:00 crc kubenswrapper[5099]: I0121 19:14:00.143907 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483714-xmmxq"] Jan 21 19:14:00 crc kubenswrapper[5099]: I0121 19:14:00.164263 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1f11078d-06da-4c9b-8d6c-dd45cbb8ff5d" containerName="oc" Jan 21 19:14:00 crc kubenswrapper[5099]: I0121 19:14:00.164762 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f11078d-06da-4c9b-8d6c-dd45cbb8ff5d" containerName="oc" Jan 21 19:14:00 crc kubenswrapper[5099]: I0121 19:14:00.165388 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="1f11078d-06da-4c9b-8d6c-dd45cbb8ff5d" containerName="oc" Jan 21 19:14:00 crc kubenswrapper[5099]: I0121 19:14:00.179274 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483714-xmmxq"] Jan 21 19:14:00 crc kubenswrapper[5099]: I0121 19:14:00.179488 5099 util.go:30] "No sandbox for pod can be found. 
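Three consecutive liveness failures above (19:11:52, 19:12:22, 19:12:52) cross the probe's failure threshold, so the kubelet stops waiting and restarts the container: "failed liveness probe, will be restarted" followed by "Killing container with a grace period" with gracePeriod=600, which is the pod's termination grace period (compare the marketplace pod further below, killed with gracePeriod=2). What is being logged is the usual two-step stop. A loose illustration in plain Go, not the kubelet's actual code path (the sleep command and the 3s grace are placeholders):

package main

import (
	"os/exec"
	"syscall"
	"time"
)

func main() {
	cmd := exec.Command("sleep", "60") // stand-in for the container process
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	_ = cmd.Process.Signal(syscall.SIGTERM) // polite stop, like CRI StopContainer with a timeout
	select {
	case <-done:
		// exited within the grace period
	case <-time.After(3 * time.Second):
		_ = cmd.Process.Kill() // grace period expired: SIGKILL
	}
}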
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483714-xmmxq" Jan 21 19:14:00 crc kubenswrapper[5099]: I0121 19:14:00.192318 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-z79sf\"" Jan 21 19:14:00 crc kubenswrapper[5099]: I0121 19:14:00.192639 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 19:14:00 crc kubenswrapper[5099]: I0121 19:14:00.193141 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 19:14:00 crc kubenswrapper[5099]: I0121 19:14:00.337960 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khrgv\" (UniqueName: \"kubernetes.io/projected/d262170d-5cf9-4e9a-b5b9-86a9573a63c6-kube-api-access-khrgv\") pod \"auto-csr-approver-29483714-xmmxq\" (UID: \"d262170d-5cf9-4e9a-b5b9-86a9573a63c6\") " pod="openshift-infra/auto-csr-approver-29483714-xmmxq" Jan 21 19:14:00 crc kubenswrapper[5099]: I0121 19:14:00.439973 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-khrgv\" (UniqueName: \"kubernetes.io/projected/d262170d-5cf9-4e9a-b5b9-86a9573a63c6-kube-api-access-khrgv\") pod \"auto-csr-approver-29483714-xmmxq\" (UID: \"d262170d-5cf9-4e9a-b5b9-86a9573a63c6\") " pod="openshift-infra/auto-csr-approver-29483714-xmmxq" Jan 21 19:14:00 crc kubenswrapper[5099]: I0121 19:14:00.464170 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-khrgv\" (UniqueName: \"kubernetes.io/projected/d262170d-5cf9-4e9a-b5b9-86a9573a63c6-kube-api-access-khrgv\") pod \"auto-csr-approver-29483714-xmmxq\" (UID: \"d262170d-5cf9-4e9a-b5b9-86a9573a63c6\") " pod="openshift-infra/auto-csr-approver-29483714-xmmxq" Jan 21 19:14:00 crc kubenswrapper[5099]: I0121 19:14:00.515183 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483714-xmmxq" Jan 21 19:14:00 crc kubenswrapper[5099]: I0121 19:14:00.952782 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483714-xmmxq"] Jan 21 19:14:01 crc kubenswrapper[5099]: I0121 19:14:01.204564 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483714-xmmxq" event={"ID":"d262170d-5cf9-4e9a-b5b9-86a9573a63c6","Type":"ContainerStarted","Data":"e110dcd0a4b18240f710c5c20cd0287d1a6331789ce3d2cd46e08a76f6598420"} Jan 21 19:14:02 crc kubenswrapper[5099]: I0121 19:14:02.223951 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483714-xmmxq" event={"ID":"d262170d-5cf9-4e9a-b5b9-86a9573a63c6","Type":"ContainerStarted","Data":"3d67d5f60761bf8b2624edc3340a23e40ce6dc28a9d54a9a141a8019a0ecd900"} Jan 21 19:14:02 crc kubenswrapper[5099]: I0121 19:14:02.250797 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29483714-xmmxq" podStartSLOduration=1.37738585 podStartE2EDuration="2.2507691s" podCreationTimestamp="2026-01-21 19:14:00 +0000 UTC" firstStartedPulling="2026-01-21 19:14:00.960977418 +0000 UTC m=+3598.374939879" lastFinishedPulling="2026-01-21 19:14:01.834360648 +0000 UTC m=+3599.248323129" observedRunningTime="2026-01-21 19:14:02.243206745 +0000 UTC m=+3599.657169216" watchObservedRunningTime="2026-01-21 19:14:02.2507691 +0000 UTC m=+3599.664731561" Jan 21 19:14:03 crc kubenswrapper[5099]: I0121 19:14:03.232966 5099 generic.go:358] "Generic (PLEG): container finished" podID="d262170d-5cf9-4e9a-b5b9-86a9573a63c6" containerID="3d67d5f60761bf8b2624edc3340a23e40ce6dc28a9d54a9a141a8019a0ecd900" exitCode=0 Jan 21 19:14:03 crc kubenswrapper[5099]: I0121 19:14:03.233102 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483714-xmmxq" event={"ID":"d262170d-5cf9-4e9a-b5b9-86a9573a63c6","Type":"ContainerDied","Data":"3d67d5f60761bf8b2624edc3340a23e40ce6dc28a9d54a9a141a8019a0ecd900"} Jan 21 19:14:04 crc kubenswrapper[5099]: I0121 19:14:04.504806 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483714-xmmxq" Jan 21 19:14:04 crc kubenswrapper[5099]: I0121 19:14:04.524250 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khrgv\" (UniqueName: \"kubernetes.io/projected/d262170d-5cf9-4e9a-b5b9-86a9573a63c6-kube-api-access-khrgv\") pod \"d262170d-5cf9-4e9a-b5b9-86a9573a63c6\" (UID: \"d262170d-5cf9-4e9a-b5b9-86a9573a63c6\") " Jan 21 19:14:04 crc kubenswrapper[5099]: I0121 19:14:04.534004 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d262170d-5cf9-4e9a-b5b9-86a9573a63c6-kube-api-access-khrgv" (OuterVolumeSpecName: "kube-api-access-khrgv") pod "d262170d-5cf9-4e9a-b5b9-86a9573a63c6" (UID: "d262170d-5cf9-4e9a-b5b9-86a9573a63c6"). InnerVolumeSpecName "kube-api-access-khrgv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 19:14:04 crc kubenswrapper[5099]: I0121 19:14:04.625730 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-khrgv\" (UniqueName: \"kubernetes.io/projected/d262170d-5cf9-4e9a-b5b9-86a9573a63c6-kube-api-access-khrgv\") on node \"crc\" DevicePath \"\"" Jan 21 19:14:05 crc kubenswrapper[5099]: I0121 19:14:05.253816 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483714-xmmxq" event={"ID":"d262170d-5cf9-4e9a-b5b9-86a9573a63c6","Type":"ContainerDied","Data":"e110dcd0a4b18240f710c5c20cd0287d1a6331789ce3d2cd46e08a76f6598420"} Jan 21 19:14:05 crc kubenswrapper[5099]: I0121 19:14:05.254380 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e110dcd0a4b18240f710c5c20cd0287d1a6331789ce3d2cd46e08a76f6598420" Jan 21 19:14:05 crc kubenswrapper[5099]: I0121 19:14:05.253879 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483714-xmmxq" Jan 21 19:14:05 crc kubenswrapper[5099]: I0121 19:14:05.321668 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483708-6v7w9"] Jan 21 19:14:05 crc kubenswrapper[5099]: I0121 19:14:05.333926 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483708-6v7w9"] Jan 21 19:14:05 crc kubenswrapper[5099]: I0121 19:14:05.547986 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6pvpm_d9b34413-4767-4d59-b13b-8f882453977a/kube-multus/0.log" Jan 21 19:14:05 crc kubenswrapper[5099]: I0121 19:14:05.548195 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6pvpm_d9b34413-4767-4d59-b13b-8f882453977a/kube-multus/0.log" Jan 21 19:14:05 crc kubenswrapper[5099]: I0121 19:14:05.555675 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 19:14:05 crc kubenswrapper[5099]: I0121 19:14:05.556216 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 19:14:05 crc kubenswrapper[5099]: I0121 19:14:05.924184 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="484fcc86-6678-41da-84fe-640dae7e3798" path="/var/lib/kubelet/pods/484fcc86-6678-41da-84fe-640dae7e3798/volumes" Jan 21 19:14:20 crc kubenswrapper[5099]: I0121 19:14:20.423002 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bdpkz"] Jan 21 19:14:20 crc kubenswrapper[5099]: I0121 19:14:20.425041 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d262170d-5cf9-4e9a-b5b9-86a9573a63c6" containerName="oc" Jan 21 19:14:20 crc kubenswrapper[5099]: I0121 19:14:20.425067 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="d262170d-5cf9-4e9a-b5b9-86a9573a63c6" containerName="oc" Jan 21 19:14:20 crc kubenswrapper[5099]: I0121 19:14:20.425284 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="d262170d-5cf9-4e9a-b5b9-86a9573a63c6" containerName="oc" Jan 21 19:14:20 crc kubenswrapper[5099]: I0121 19:14:20.453724 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bdpkz"] Jan 21 19:14:20 crc 
kubenswrapper[5099]: I0121 19:14:20.453984 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bdpkz" Jan 21 19:14:20 crc kubenswrapper[5099]: I0121 19:14:20.630839 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kx9d\" (UniqueName: \"kubernetes.io/projected/da8f2334-70b1-4ed2-8080-814f9f60059f-kube-api-access-5kx9d\") pod \"community-operators-bdpkz\" (UID: \"da8f2334-70b1-4ed2-8080-814f9f60059f\") " pod="openshift-marketplace/community-operators-bdpkz" Jan 21 19:14:20 crc kubenswrapper[5099]: I0121 19:14:20.630914 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da8f2334-70b1-4ed2-8080-814f9f60059f-catalog-content\") pod \"community-operators-bdpkz\" (UID: \"da8f2334-70b1-4ed2-8080-814f9f60059f\") " pod="openshift-marketplace/community-operators-bdpkz" Jan 21 19:14:20 crc kubenswrapper[5099]: I0121 19:14:20.631289 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da8f2334-70b1-4ed2-8080-814f9f60059f-utilities\") pod \"community-operators-bdpkz\" (UID: \"da8f2334-70b1-4ed2-8080-814f9f60059f\") " pod="openshift-marketplace/community-operators-bdpkz" Jan 21 19:14:20 crc kubenswrapper[5099]: I0121 19:14:20.733122 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da8f2334-70b1-4ed2-8080-814f9f60059f-utilities\") pod \"community-operators-bdpkz\" (UID: \"da8f2334-70b1-4ed2-8080-814f9f60059f\") " pod="openshift-marketplace/community-operators-bdpkz" Jan 21 19:14:20 crc kubenswrapper[5099]: I0121 19:14:20.733227 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5kx9d\" (UniqueName: \"kubernetes.io/projected/da8f2334-70b1-4ed2-8080-814f9f60059f-kube-api-access-5kx9d\") pod \"community-operators-bdpkz\" (UID: \"da8f2334-70b1-4ed2-8080-814f9f60059f\") " pod="openshift-marketplace/community-operators-bdpkz" Jan 21 19:14:20 crc kubenswrapper[5099]: I0121 19:14:20.733274 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da8f2334-70b1-4ed2-8080-814f9f60059f-catalog-content\") pod \"community-operators-bdpkz\" (UID: \"da8f2334-70b1-4ed2-8080-814f9f60059f\") " pod="openshift-marketplace/community-operators-bdpkz" Jan 21 19:14:20 crc kubenswrapper[5099]: I0121 19:14:20.734057 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da8f2334-70b1-4ed2-8080-814f9f60059f-catalog-content\") pod \"community-operators-bdpkz\" (UID: \"da8f2334-70b1-4ed2-8080-814f9f60059f\") " pod="openshift-marketplace/community-operators-bdpkz" Jan 21 19:14:20 crc kubenswrapper[5099]: I0121 19:14:20.734368 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da8f2334-70b1-4ed2-8080-814f9f60059f-utilities\") pod \"community-operators-bdpkz\" (UID: \"da8f2334-70b1-4ed2-8080-814f9f60059f\") " pod="openshift-marketplace/community-operators-bdpkz" Jan 21 19:14:20 crc kubenswrapper[5099]: I0121 19:14:20.757316 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kx9d\" (UniqueName: 
\"kubernetes.io/projected/da8f2334-70b1-4ed2-8080-814f9f60059f-kube-api-access-5kx9d\") pod \"community-operators-bdpkz\" (UID: \"da8f2334-70b1-4ed2-8080-814f9f60059f\") " pod="openshift-marketplace/community-operators-bdpkz" Jan 21 19:14:20 crc kubenswrapper[5099]: I0121 19:14:20.778324 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bdpkz" Jan 21 19:14:21 crc kubenswrapper[5099]: I0121 19:14:21.361595 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bdpkz"] Jan 21 19:14:21 crc kubenswrapper[5099]: I0121 19:14:21.418163 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bdpkz" event={"ID":"da8f2334-70b1-4ed2-8080-814f9f60059f","Type":"ContainerStarted","Data":"c973c46b92b2776e10bb10a6f5f9f453d7e090ce2208ad2f89ebd3a7c820300f"} Jan 21 19:14:22 crc kubenswrapper[5099]: I0121 19:14:22.190649 5099 scope.go:117] "RemoveContainer" containerID="6985428de2f27d84dabec09c6bde17d624c54cf155688ed946ea06971eb5cf69" Jan 21 19:14:22 crc kubenswrapper[5099]: I0121 19:14:22.431707 5099 generic.go:358] "Generic (PLEG): container finished" podID="da8f2334-70b1-4ed2-8080-814f9f60059f" containerID="88b8199940cfee8a18b23ce86cb5ff958325bc2c6248d8b052a853c5b64673d2" exitCode=0 Jan 21 19:14:22 crc kubenswrapper[5099]: I0121 19:14:22.431872 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bdpkz" event={"ID":"da8f2334-70b1-4ed2-8080-814f9f60059f","Type":"ContainerDied","Data":"88b8199940cfee8a18b23ce86cb5ff958325bc2c6248d8b052a853c5b64673d2"} Jan 21 19:14:23 crc kubenswrapper[5099]: I0121 19:14:23.443722 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bdpkz" event={"ID":"da8f2334-70b1-4ed2-8080-814f9f60059f","Type":"ContainerStarted","Data":"d9f36945176ec53b7dcc22ae8095c21301a57bc932d70a994d94bb4920fcfad2"} Jan 21 19:14:24 crc kubenswrapper[5099]: I0121 19:14:24.456438 5099 generic.go:358] "Generic (PLEG): container finished" podID="da8f2334-70b1-4ed2-8080-814f9f60059f" containerID="d9f36945176ec53b7dcc22ae8095c21301a57bc932d70a994d94bb4920fcfad2" exitCode=0 Jan 21 19:14:24 crc kubenswrapper[5099]: I0121 19:14:24.457233 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bdpkz" event={"ID":"da8f2334-70b1-4ed2-8080-814f9f60059f","Type":"ContainerDied","Data":"d9f36945176ec53b7dcc22ae8095c21301a57bc932d70a994d94bb4920fcfad2"} Jan 21 19:14:25 crc kubenswrapper[5099]: I0121 19:14:25.469797 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bdpkz" event={"ID":"da8f2334-70b1-4ed2-8080-814f9f60059f","Type":"ContainerStarted","Data":"c22cd3868c23a6e0fc7cadfa3558008e210b7c0b33d3b414a895d5e6175cb069"} Jan 21 19:14:25 crc kubenswrapper[5099]: I0121 19:14:25.497817 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bdpkz" podStartSLOduration=4.7991825200000005 podStartE2EDuration="5.497724847s" podCreationTimestamp="2026-01-21 19:14:20 +0000 UTC" firstStartedPulling="2026-01-21 19:14:22.432999728 +0000 UTC m=+3619.846962199" lastFinishedPulling="2026-01-21 19:14:23.131542055 +0000 UTC m=+3620.545504526" observedRunningTime="2026-01-21 19:14:25.49453252 +0000 UTC m=+3622.908495001" watchObservedRunningTime="2026-01-21 19:14:25.497724847 +0000 UTC m=+3622.911687318" Jan 21 19:14:30 
Jan 21 19:14:30 crc kubenswrapper[5099]: I0121 19:14:30.780158 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bdpkz"
Jan 21 19:14:30 crc kubenswrapper[5099]: I0121 19:14:30.831626 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bdpkz"
Jan 21 19:14:31 crc kubenswrapper[5099]: I0121 19:14:31.587131 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bdpkz"
Jan 21 19:14:31 crc kubenswrapper[5099]: I0121 19:14:31.648327 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bdpkz"]
Jan 21 19:14:33 crc kubenswrapper[5099]: I0121 19:14:33.552579 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bdpkz" podUID="da8f2334-70b1-4ed2-8080-814f9f60059f" containerName="registry-server" containerID="cri-o://c22cd3868c23a6e0fc7cadfa3558008e210b7c0b33d3b414a895d5e6175cb069" gracePeriod=2
Jan 21 19:14:33 crc kubenswrapper[5099]: I0121 19:14:33.977771 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bdpkz"
Jan 21 19:14:34 crc kubenswrapper[5099]: I0121 19:14:34.101721 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da8f2334-70b1-4ed2-8080-814f9f60059f-utilities\") pod \"da8f2334-70b1-4ed2-8080-814f9f60059f\" (UID: \"da8f2334-70b1-4ed2-8080-814f9f60059f\") "
Jan 21 19:14:34 crc kubenswrapper[5099]: I0121 19:14:34.102043 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da8f2334-70b1-4ed2-8080-814f9f60059f-catalog-content\") pod \"da8f2334-70b1-4ed2-8080-814f9f60059f\" (UID: \"da8f2334-70b1-4ed2-8080-814f9f60059f\") "
Jan 21 19:14:34 crc kubenswrapper[5099]: I0121 19:14:34.102399 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kx9d\" (UniqueName: \"kubernetes.io/projected/da8f2334-70b1-4ed2-8080-814f9f60059f-kube-api-access-5kx9d\") pod \"da8f2334-70b1-4ed2-8080-814f9f60059f\" (UID: \"da8f2334-70b1-4ed2-8080-814f9f60059f\") "
Jan 21 19:14:34 crc kubenswrapper[5099]: I0121 19:14:34.103759 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da8f2334-70b1-4ed2-8080-814f9f60059f-utilities" (OuterVolumeSpecName: "utilities") pod "da8f2334-70b1-4ed2-8080-814f9f60059f" (UID: "da8f2334-70b1-4ed2-8080-814f9f60059f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 19:14:34 crc kubenswrapper[5099]: I0121 19:14:34.113707 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da8f2334-70b1-4ed2-8080-814f9f60059f-kube-api-access-5kx9d" (OuterVolumeSpecName: "kube-api-access-5kx9d") pod "da8f2334-70b1-4ed2-8080-814f9f60059f" (UID: "da8f2334-70b1-4ed2-8080-814f9f60059f"). InnerVolumeSpecName "kube-api-access-5kx9d". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 19:14:34 crc kubenswrapper[5099]: I0121 19:14:34.160045 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da8f2334-70b1-4ed2-8080-814f9f60059f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "da8f2334-70b1-4ed2-8080-814f9f60059f" (UID: "da8f2334-70b1-4ed2-8080-814f9f60059f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 19:14:34 crc kubenswrapper[5099]: I0121 19:14:34.204366 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da8f2334-70b1-4ed2-8080-814f9f60059f-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 19:14:34 crc kubenswrapper[5099]: I0121 19:14:34.204414 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da8f2334-70b1-4ed2-8080-814f9f60059f-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 19:14:34 crc kubenswrapper[5099]: I0121 19:14:34.204427 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5kx9d\" (UniqueName: \"kubernetes.io/projected/da8f2334-70b1-4ed2-8080-814f9f60059f-kube-api-access-5kx9d\") on node \"crc\" DevicePath \"\""
Jan 21 19:14:34 crc kubenswrapper[5099]: I0121 19:14:34.564064 5099 generic.go:358] "Generic (PLEG): container finished" podID="da8f2334-70b1-4ed2-8080-814f9f60059f" containerID="c22cd3868c23a6e0fc7cadfa3558008e210b7c0b33d3b414a895d5e6175cb069" exitCode=0
Jan 21 19:14:34 crc kubenswrapper[5099]: I0121 19:14:34.564174 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bdpkz"
Jan 21 19:14:34 crc kubenswrapper[5099]: I0121 19:14:34.564234 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bdpkz" event={"ID":"da8f2334-70b1-4ed2-8080-814f9f60059f","Type":"ContainerDied","Data":"c22cd3868c23a6e0fc7cadfa3558008e210b7c0b33d3b414a895d5e6175cb069"}
Jan 21 19:14:34 crc kubenswrapper[5099]: I0121 19:14:34.564296 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bdpkz" event={"ID":"da8f2334-70b1-4ed2-8080-814f9f60059f","Type":"ContainerDied","Data":"c973c46b92b2776e10bb10a6f5f9f453d7e090ce2208ad2f89ebd3a7c820300f"}
Jan 21 19:14:34 crc kubenswrapper[5099]: I0121 19:14:34.564322 5099 scope.go:117] "RemoveContainer" containerID="c22cd3868c23a6e0fc7cadfa3558008e210b7c0b33d3b414a895d5e6175cb069"
Jan 21 19:14:34 crc kubenswrapper[5099]: I0121 19:14:34.598080 5099 scope.go:117] "RemoveContainer" containerID="d9f36945176ec53b7dcc22ae8095c21301a57bc932d70a994d94bb4920fcfad2"
Jan 21 19:14:34 crc kubenswrapper[5099]: I0121 19:14:34.614458 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bdpkz"]
Jan 21 19:14:34 crc kubenswrapper[5099]: I0121 19:14:34.633150 5099 scope.go:117] "RemoveContainer" containerID="88b8199940cfee8a18b23ce86cb5ff958325bc2c6248d8b052a853c5b64673d2"
Jan 21 19:14:34 crc kubenswrapper[5099]: I0121 19:14:34.638364 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bdpkz"]
Jan 21 19:14:34 crc kubenswrapper[5099]: I0121 19:14:34.661521 5099 scope.go:117] "RemoveContainer" containerID="c22cd3868c23a6e0fc7cadfa3558008e210b7c0b33d3b414a895d5e6175cb069"
Jan 21 19:14:34 crc kubenswrapper[5099]: E0121 19:14:34.664368 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c22cd3868c23a6e0fc7cadfa3558008e210b7c0b33d3b414a895d5e6175cb069\": container with ID starting with c22cd3868c23a6e0fc7cadfa3558008e210b7c0b33d3b414a895d5e6175cb069 not found: ID does not exist" containerID="c22cd3868c23a6e0fc7cadfa3558008e210b7c0b33d3b414a895d5e6175cb069"
Jan 21 19:14:34 crc kubenswrapper[5099]: I0121 19:14:34.664455 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c22cd3868c23a6e0fc7cadfa3558008e210b7c0b33d3b414a895d5e6175cb069"} err="failed to get container status \"c22cd3868c23a6e0fc7cadfa3558008e210b7c0b33d3b414a895d5e6175cb069\": rpc error: code = NotFound desc = could not find container \"c22cd3868c23a6e0fc7cadfa3558008e210b7c0b33d3b414a895d5e6175cb069\": container with ID starting with c22cd3868c23a6e0fc7cadfa3558008e210b7c0b33d3b414a895d5e6175cb069 not found: ID does not exist"
Jan 21 19:14:34 crc kubenswrapper[5099]: I0121 19:14:34.664506 5099 scope.go:117] "RemoveContainer" containerID="d9f36945176ec53b7dcc22ae8095c21301a57bc932d70a994d94bb4920fcfad2"
Jan 21 19:14:34 crc kubenswrapper[5099]: E0121 19:14:34.665218 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9f36945176ec53b7dcc22ae8095c21301a57bc932d70a994d94bb4920fcfad2\": container with ID starting with d9f36945176ec53b7dcc22ae8095c21301a57bc932d70a994d94bb4920fcfad2 not found: ID does not exist" containerID="d9f36945176ec53b7dcc22ae8095c21301a57bc932d70a994d94bb4920fcfad2"
Jan 21 19:14:34 crc kubenswrapper[5099]: I0121 19:14:34.665360 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9f36945176ec53b7dcc22ae8095c21301a57bc932d70a994d94bb4920fcfad2"} err="failed to get container status \"d9f36945176ec53b7dcc22ae8095c21301a57bc932d70a994d94bb4920fcfad2\": rpc error: code = NotFound desc = could not find container \"d9f36945176ec53b7dcc22ae8095c21301a57bc932d70a994d94bb4920fcfad2\": container with ID starting with d9f36945176ec53b7dcc22ae8095c21301a57bc932d70a994d94bb4920fcfad2 not found: ID does not exist"
Jan 21 19:14:34 crc kubenswrapper[5099]: I0121 19:14:34.665535 5099 scope.go:117] "RemoveContainer" containerID="88b8199940cfee8a18b23ce86cb5ff958325bc2c6248d8b052a853c5b64673d2"
Jan 21 19:14:34 crc kubenswrapper[5099]: E0121 19:14:34.666140 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88b8199940cfee8a18b23ce86cb5ff958325bc2c6248d8b052a853c5b64673d2\": container with ID starting with 88b8199940cfee8a18b23ce86cb5ff958325bc2c6248d8b052a853c5b64673d2 not found: ID does not exist" containerID="88b8199940cfee8a18b23ce86cb5ff958325bc2c6248d8b052a853c5b64673d2"
Jan 21 19:14:34 crc kubenswrapper[5099]: I0121 19:14:34.666190 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88b8199940cfee8a18b23ce86cb5ff958325bc2c6248d8b052a853c5b64673d2"} err="failed to get container status \"88b8199940cfee8a18b23ce86cb5ff958325bc2c6248d8b052a853c5b64673d2\": rpc error: code = NotFound desc = could not find container \"88b8199940cfee8a18b23ce86cb5ff958325bc2c6248d8b052a853c5b64673d2\": container with ID starting with 88b8199940cfee8a18b23ce86cb5ff958325bc2c6248d8b052a853c5b64673d2 not found: ID does not exist"
Jan 21 19:14:35 crc kubenswrapper[5099]: I0121 19:14:35.923632 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da8f2334-70b1-4ed2-8080-814f9f60059f" path="/var/lib/kubelet/pods/da8f2334-70b1-4ed2-8080-814f9f60059f/volumes"
up orphaned pod volumes dir" podUID="da8f2334-70b1-4ed2-8080-814f9f60059f" path="/var/lib/kubelet/pods/da8f2334-70b1-4ed2-8080-814f9f60059f/volumes" Jan 21 19:14:52 crc kubenswrapper[5099]: I0121 19:14:52.064968 5099 patch_prober.go:28] interesting pod/machine-config-daemon-hsl47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 19:14:52 crc kubenswrapper[5099]: I0121 19:14:52.066145 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 19:15:00 crc kubenswrapper[5099]: I0121 19:15:00.157430 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483715-4hf2f"] Jan 21 19:15:00 crc kubenswrapper[5099]: I0121 19:15:00.161400 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="da8f2334-70b1-4ed2-8080-814f9f60059f" containerName="extract-content" Jan 21 19:15:00 crc kubenswrapper[5099]: I0121 19:15:00.161518 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="da8f2334-70b1-4ed2-8080-814f9f60059f" containerName="extract-content" Jan 21 19:15:00 crc kubenswrapper[5099]: I0121 19:15:00.161604 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="da8f2334-70b1-4ed2-8080-814f9f60059f" containerName="registry-server" Jan 21 19:15:00 crc kubenswrapper[5099]: I0121 19:15:00.161666 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="da8f2334-70b1-4ed2-8080-814f9f60059f" containerName="registry-server" Jan 21 19:15:00 crc kubenswrapper[5099]: I0121 19:15:00.161783 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="da8f2334-70b1-4ed2-8080-814f9f60059f" containerName="extract-utilities" Jan 21 19:15:00 crc kubenswrapper[5099]: I0121 19:15:00.161852 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="da8f2334-70b1-4ed2-8080-814f9f60059f" containerName="extract-utilities" Jan 21 19:15:00 crc kubenswrapper[5099]: I0121 19:15:00.162118 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="da8f2334-70b1-4ed2-8080-814f9f60059f" containerName="registry-server" Jan 21 19:15:00 crc kubenswrapper[5099]: I0121 19:15:00.169304 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483715-4hf2f" Jan 21 19:15:00 crc kubenswrapper[5099]: I0121 19:15:00.174098 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483715-4hf2f"] Jan 21 19:15:00 crc kubenswrapper[5099]: I0121 19:15:00.174363 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 21 19:15:00 crc kubenswrapper[5099]: I0121 19:15:00.175180 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 21 19:15:00 crc kubenswrapper[5099]: I0121 19:15:00.220427 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15953199-ab86-41f0-be35-37a0d23cd38b-config-volume\") pod \"collect-profiles-29483715-4hf2f\" (UID: \"15953199-ab86-41f0-be35-37a0d23cd38b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483715-4hf2f" Jan 21 19:15:00 crc kubenswrapper[5099]: I0121 19:15:00.220529 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/15953199-ab86-41f0-be35-37a0d23cd38b-secret-volume\") pod \"collect-profiles-29483715-4hf2f\" (UID: \"15953199-ab86-41f0-be35-37a0d23cd38b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483715-4hf2f" Jan 21 19:15:00 crc kubenswrapper[5099]: I0121 19:15:00.220595 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gwtk\" (UniqueName: \"kubernetes.io/projected/15953199-ab86-41f0-be35-37a0d23cd38b-kube-api-access-2gwtk\") pod \"collect-profiles-29483715-4hf2f\" (UID: \"15953199-ab86-41f0-be35-37a0d23cd38b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483715-4hf2f" Jan 21 19:15:00 crc kubenswrapper[5099]: I0121 19:15:00.322559 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15953199-ab86-41f0-be35-37a0d23cd38b-config-volume\") pod \"collect-profiles-29483715-4hf2f\" (UID: \"15953199-ab86-41f0-be35-37a0d23cd38b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483715-4hf2f" Jan 21 19:15:00 crc kubenswrapper[5099]: I0121 19:15:00.323106 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/15953199-ab86-41f0-be35-37a0d23cd38b-secret-volume\") pod \"collect-profiles-29483715-4hf2f\" (UID: \"15953199-ab86-41f0-be35-37a0d23cd38b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483715-4hf2f" Jan 21 19:15:00 crc kubenswrapper[5099]: I0121 19:15:00.323255 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2gwtk\" (UniqueName: \"kubernetes.io/projected/15953199-ab86-41f0-be35-37a0d23cd38b-kube-api-access-2gwtk\") pod \"collect-profiles-29483715-4hf2f\" (UID: \"15953199-ab86-41f0-be35-37a0d23cd38b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483715-4hf2f" Jan 21 19:15:00 crc kubenswrapper[5099]: I0121 19:15:00.323764 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/15953199-ab86-41f0-be35-37a0d23cd38b-config-volume\") pod \"collect-profiles-29483715-4hf2f\" (UID: \"15953199-ab86-41f0-be35-37a0d23cd38b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483715-4hf2f" Jan 21 19:15:00 crc kubenswrapper[5099]: I0121 19:15:00.332804 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/15953199-ab86-41f0-be35-37a0d23cd38b-secret-volume\") pod \"collect-profiles-29483715-4hf2f\" (UID: \"15953199-ab86-41f0-be35-37a0d23cd38b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483715-4hf2f" Jan 21 19:15:00 crc kubenswrapper[5099]: I0121 19:15:00.342764 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gwtk\" (UniqueName: \"kubernetes.io/projected/15953199-ab86-41f0-be35-37a0d23cd38b-kube-api-access-2gwtk\") pod \"collect-profiles-29483715-4hf2f\" (UID: \"15953199-ab86-41f0-be35-37a0d23cd38b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483715-4hf2f" Jan 21 19:15:00 crc kubenswrapper[5099]: I0121 19:15:00.504233 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483715-4hf2f" Jan 21 19:15:00 crc kubenswrapper[5099]: I0121 19:15:00.963536 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483715-4hf2f"] Jan 21 19:15:01 crc kubenswrapper[5099]: I0121 19:15:01.832953 5099 generic.go:358] "Generic (PLEG): container finished" podID="15953199-ab86-41f0-be35-37a0d23cd38b" containerID="76b7121a2818e42f672fdb0cd997c04027e59106116cea7dd581542b6e494a8c" exitCode=0 Jan 21 19:15:01 crc kubenswrapper[5099]: I0121 19:15:01.833850 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483715-4hf2f" event={"ID":"15953199-ab86-41f0-be35-37a0d23cd38b","Type":"ContainerDied","Data":"76b7121a2818e42f672fdb0cd997c04027e59106116cea7dd581542b6e494a8c"} Jan 21 19:15:01 crc kubenswrapper[5099]: I0121 19:15:01.833894 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483715-4hf2f" event={"ID":"15953199-ab86-41f0-be35-37a0d23cd38b","Type":"ContainerStarted","Data":"fc743f9e491e65360f91162946e8ec695b0a8d598492449bb210ef5b57b987a6"} Jan 21 19:15:03 crc kubenswrapper[5099]: I0121 19:15:03.096295 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483715-4hf2f" Jan 21 19:15:03 crc kubenswrapper[5099]: I0121 19:15:03.172842 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15953199-ab86-41f0-be35-37a0d23cd38b-config-volume\") pod \"15953199-ab86-41f0-be35-37a0d23cd38b\" (UID: \"15953199-ab86-41f0-be35-37a0d23cd38b\") " Jan 21 19:15:03 crc kubenswrapper[5099]: I0121 19:15:03.172988 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gwtk\" (UniqueName: \"kubernetes.io/projected/15953199-ab86-41f0-be35-37a0d23cd38b-kube-api-access-2gwtk\") pod \"15953199-ab86-41f0-be35-37a0d23cd38b\" (UID: \"15953199-ab86-41f0-be35-37a0d23cd38b\") " Jan 21 19:15:03 crc kubenswrapper[5099]: I0121 19:15:03.173058 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/15953199-ab86-41f0-be35-37a0d23cd38b-secret-volume\") pod \"15953199-ab86-41f0-be35-37a0d23cd38b\" (UID: \"15953199-ab86-41f0-be35-37a0d23cd38b\") " Jan 21 19:15:03 crc kubenswrapper[5099]: I0121 19:15:03.173717 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15953199-ab86-41f0-be35-37a0d23cd38b-config-volume" (OuterVolumeSpecName: "config-volume") pod "15953199-ab86-41f0-be35-37a0d23cd38b" (UID: "15953199-ab86-41f0-be35-37a0d23cd38b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 19:15:03 crc kubenswrapper[5099]: I0121 19:15:03.180473 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15953199-ab86-41f0-be35-37a0d23cd38b-kube-api-access-2gwtk" (OuterVolumeSpecName: "kube-api-access-2gwtk") pod "15953199-ab86-41f0-be35-37a0d23cd38b" (UID: "15953199-ab86-41f0-be35-37a0d23cd38b"). InnerVolumeSpecName "kube-api-access-2gwtk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 19:15:03 crc kubenswrapper[5099]: I0121 19:15:03.188052 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15953199-ab86-41f0-be35-37a0d23cd38b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "15953199-ab86-41f0-be35-37a0d23cd38b" (UID: "15953199-ab86-41f0-be35-37a0d23cd38b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 19:15:03 crc kubenswrapper[5099]: I0121 19:15:03.275173 5099 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15953199-ab86-41f0-be35-37a0d23cd38b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 19:15:03 crc kubenswrapper[5099]: I0121 19:15:03.275237 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2gwtk\" (UniqueName: \"kubernetes.io/projected/15953199-ab86-41f0-be35-37a0d23cd38b-kube-api-access-2gwtk\") on node \"crc\" DevicePath \"\"" Jan 21 19:15:03 crc kubenswrapper[5099]: I0121 19:15:03.275252 5099 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/15953199-ab86-41f0-be35-37a0d23cd38b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 19:15:03 crc kubenswrapper[5099]: I0121 19:15:03.853665 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483715-4hf2f" Jan 21 19:15:03 crc kubenswrapper[5099]: I0121 19:15:03.853684 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483715-4hf2f" event={"ID":"15953199-ab86-41f0-be35-37a0d23cd38b","Type":"ContainerDied","Data":"fc743f9e491e65360f91162946e8ec695b0a8d598492449bb210ef5b57b987a6"} Jan 21 19:15:03 crc kubenswrapper[5099]: I0121 19:15:03.853783 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc743f9e491e65360f91162946e8ec695b0a8d598492449bb210ef5b57b987a6" Jan 21 19:15:04 crc kubenswrapper[5099]: I0121 19:15:04.163225 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483670-nvvfv"] Jan 21 19:15:04 crc kubenswrapper[5099]: I0121 19:15:04.168892 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483670-nvvfv"] Jan 21 19:15:05 crc kubenswrapper[5099]: I0121 19:15:05.927021 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c4af3f3-c4b5-4dad-b8df-57771df1cab0" path="/var/lib/kubelet/pods/9c4af3f3-c4b5-4dad-b8df-57771df1cab0/volumes" Jan 21 19:15:22 crc kubenswrapper[5099]: I0121 19:15:22.065102 5099 patch_prober.go:28] interesting pod/machine-config-daemon-hsl47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 19:15:22 crc kubenswrapper[5099]: I0121 19:15:22.065976 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 19:15:22 crc kubenswrapper[5099]: I0121 19:15:22.370577 5099 scope.go:117] "RemoveContainer" containerID="c36363a9675953e73b5b1b57297647794be5a4cdc189c13612196c3191395ef9" Jan 21 19:15:52 crc kubenswrapper[5099]: I0121 19:15:52.065137 5099 patch_prober.go:28] interesting pod/machine-config-daemon-hsl47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 19:15:52 crc kubenswrapper[5099]: I0121 19:15:52.066445 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 19:15:52 crc kubenswrapper[5099]: I0121 19:15:52.066548 5099 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" Jan 21 19:15:52 crc kubenswrapper[5099]: I0121 19:15:52.067527 5099 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"902eafa9de1bc3af59cd4e82be49a3162545844e170420f7a5568150cb46f6ab"} pod="openshift-machine-config-operator/machine-config-daemon-hsl47" containerMessage="Container 
machine-config-daemon failed liveness probe, will be restarted" Jan 21 19:15:52 crc kubenswrapper[5099]: I0121 19:15:52.067583 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" containerID="cri-o://902eafa9de1bc3af59cd4e82be49a3162545844e170420f7a5568150cb46f6ab" gracePeriod=600 Jan 21 19:15:52 crc kubenswrapper[5099]: E0121 19:15:52.206518 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 19:15:52 crc kubenswrapper[5099]: I0121 19:15:52.330379 5099 generic.go:358] "Generic (PLEG): container finished" podID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerID="902eafa9de1bc3af59cd4e82be49a3162545844e170420f7a5568150cb46f6ab" exitCode=0 Jan 21 19:15:52 crc kubenswrapper[5099]: I0121 19:15:52.330482 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" event={"ID":"b19b831f-eaf0-4c77-859b-84eb9a5f233c","Type":"ContainerDied","Data":"902eafa9de1bc3af59cd4e82be49a3162545844e170420f7a5568150cb46f6ab"} Jan 21 19:15:52 crc kubenswrapper[5099]: I0121 19:15:52.330582 5099 scope.go:117] "RemoveContainer" containerID="81a0a8a971ec28427a2db41f48491f6e97e0f5fb3579db8bfc1a3a42d4581b72" Jan 21 19:15:52 crc kubenswrapper[5099]: I0121 19:15:52.331908 5099 scope.go:117] "RemoveContainer" containerID="902eafa9de1bc3af59cd4e82be49a3162545844e170420f7a5568150cb46f6ab" Jan 21 19:15:52 crc kubenswrapper[5099]: E0121 19:15:52.336699 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 19:16:00 crc kubenswrapper[5099]: I0121 19:16:00.147419 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483716-7x57q"] Jan 21 19:16:00 crc kubenswrapper[5099]: I0121 19:16:00.150099 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="15953199-ab86-41f0-be35-37a0d23cd38b" containerName="collect-profiles" Jan 21 19:16:00 crc kubenswrapper[5099]: I0121 19:16:00.150142 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="15953199-ab86-41f0-be35-37a0d23cd38b" containerName="collect-profiles" Jan 21 19:16:00 crc kubenswrapper[5099]: I0121 19:16:00.150476 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="15953199-ab86-41f0-be35-37a0d23cd38b" containerName="collect-profiles" Jan 21 19:16:00 crc kubenswrapper[5099]: I0121 19:16:00.163817 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483716-7x57q"] Jan 21 19:16:00 crc kubenswrapper[5099]: I0121 19:16:00.164072 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483716-7x57q" Jan 21 19:16:00 crc kubenswrapper[5099]: I0121 19:16:00.170403 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 19:16:00 crc kubenswrapper[5099]: I0121 19:16:00.170865 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 19:16:00 crc kubenswrapper[5099]: I0121 19:16:00.171101 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-z79sf\"" Jan 21 19:16:00 crc kubenswrapper[5099]: I0121 19:16:00.314150 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz86j\" (UniqueName: \"kubernetes.io/projected/40e8a714-529b-46a2-b839-df3fd3ac1bc6-kube-api-access-xz86j\") pod \"auto-csr-approver-29483716-7x57q\" (UID: \"40e8a714-529b-46a2-b839-df3fd3ac1bc6\") " pod="openshift-infra/auto-csr-approver-29483716-7x57q" Jan 21 19:16:00 crc kubenswrapper[5099]: I0121 19:16:00.416428 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xz86j\" (UniqueName: \"kubernetes.io/projected/40e8a714-529b-46a2-b839-df3fd3ac1bc6-kube-api-access-xz86j\") pod \"auto-csr-approver-29483716-7x57q\" (UID: \"40e8a714-529b-46a2-b839-df3fd3ac1bc6\") " pod="openshift-infra/auto-csr-approver-29483716-7x57q" Jan 21 19:16:00 crc kubenswrapper[5099]: I0121 19:16:00.457466 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xz86j\" (UniqueName: \"kubernetes.io/projected/40e8a714-529b-46a2-b839-df3fd3ac1bc6-kube-api-access-xz86j\") pod \"auto-csr-approver-29483716-7x57q\" (UID: \"40e8a714-529b-46a2-b839-df3fd3ac1bc6\") " pod="openshift-infra/auto-csr-approver-29483716-7x57q" Jan 21 19:16:00 crc kubenswrapper[5099]: I0121 19:16:00.539579 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483716-7x57q" Jan 21 19:16:01 crc kubenswrapper[5099]: I0121 19:16:01.030267 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483716-7x57q"] Jan 21 19:16:01 crc kubenswrapper[5099]: I0121 19:16:01.423225 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483716-7x57q" event={"ID":"40e8a714-529b-46a2-b839-df3fd3ac1bc6","Type":"ContainerStarted","Data":"049f8ea0dd6225e84e6190b0e335ce72b3a82e16c5a78040c7c1dce294a3dfd4"} Jan 21 19:16:03 crc kubenswrapper[5099]: I0121 19:16:03.442582 5099 generic.go:358] "Generic (PLEG): container finished" podID="40e8a714-529b-46a2-b839-df3fd3ac1bc6" containerID="a5aff4d4eebf98ecafd7c73470ce4a5b88c7545f984d20fb074f458425b2aba2" exitCode=0 Jan 21 19:16:03 crc kubenswrapper[5099]: I0121 19:16:03.442793 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483716-7x57q" event={"ID":"40e8a714-529b-46a2-b839-df3fd3ac1bc6","Type":"ContainerDied","Data":"a5aff4d4eebf98ecafd7c73470ce4a5b88c7545f984d20fb074f458425b2aba2"} Jan 21 19:16:03 crc kubenswrapper[5099]: I0121 19:16:03.924773 5099 scope.go:117] "RemoveContainer" containerID="902eafa9de1bc3af59cd4e82be49a3162545844e170420f7a5568150cb46f6ab" Jan 21 19:16:03 crc kubenswrapper[5099]: E0121 19:16:03.925180 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 19:16:04 crc kubenswrapper[5099]: I0121 19:16:04.750188 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483716-7x57q" Jan 21 19:16:04 crc kubenswrapper[5099]: I0121 19:16:04.818850 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xz86j\" (UniqueName: \"kubernetes.io/projected/40e8a714-529b-46a2-b839-df3fd3ac1bc6-kube-api-access-xz86j\") pod \"40e8a714-529b-46a2-b839-df3fd3ac1bc6\" (UID: \"40e8a714-529b-46a2-b839-df3fd3ac1bc6\") " Jan 21 19:16:04 crc kubenswrapper[5099]: I0121 19:16:04.830188 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40e8a714-529b-46a2-b839-df3fd3ac1bc6-kube-api-access-xz86j" (OuterVolumeSpecName: "kube-api-access-xz86j") pod "40e8a714-529b-46a2-b839-df3fd3ac1bc6" (UID: "40e8a714-529b-46a2-b839-df3fd3ac1bc6"). InnerVolumeSpecName "kube-api-access-xz86j". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 19:16:04 crc kubenswrapper[5099]: I0121 19:16:04.920614 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xz86j\" (UniqueName: \"kubernetes.io/projected/40e8a714-529b-46a2-b839-df3fd3ac1bc6-kube-api-access-xz86j\") on node \"crc\" DevicePath \"\"" Jan 21 19:16:05 crc kubenswrapper[5099]: I0121 19:16:05.468945 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483716-7x57q" event={"ID":"40e8a714-529b-46a2-b839-df3fd3ac1bc6","Type":"ContainerDied","Data":"049f8ea0dd6225e84e6190b0e335ce72b3a82e16c5a78040c7c1dce294a3dfd4"} Jan 21 19:16:05 crc kubenswrapper[5099]: I0121 19:16:05.469027 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="049f8ea0dd6225e84e6190b0e335ce72b3a82e16c5a78040c7c1dce294a3dfd4" Jan 21 19:16:05 crc kubenswrapper[5099]: I0121 19:16:05.469201 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483716-7x57q" Jan 21 19:16:05 crc kubenswrapper[5099]: I0121 19:16:05.852819 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483710-p44zt"] Jan 21 19:16:05 crc kubenswrapper[5099]: I0121 19:16:05.863291 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483710-p44zt"] Jan 21 19:16:05 crc kubenswrapper[5099]: I0121 19:16:05.923794 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a9c5f1e-0a9e-438a-9c43-1cbe2d31e3bb" path="/var/lib/kubelet/pods/1a9c5f1e-0a9e-438a-9c43-1cbe2d31e3bb/volumes" Jan 21 19:16:14 crc kubenswrapper[5099]: I0121 19:16:14.914087 5099 scope.go:117] "RemoveContainer" containerID="902eafa9de1bc3af59cd4e82be49a3162545844e170420f7a5568150cb46f6ab" Jan 21 19:16:14 crc kubenswrapper[5099]: E0121 19:16:14.915407 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 19:16:22 crc kubenswrapper[5099]: I0121 19:16:22.456325 5099 scope.go:117] "RemoveContainer" containerID="79589012348cf3f9b3af566ef69fef031d67785c1902bed587afc38ac2abad72" Jan 21 19:16:28 crc kubenswrapper[5099]: I0121 19:16:28.913542 5099 scope.go:117] "RemoveContainer" containerID="902eafa9de1bc3af59cd4e82be49a3162545844e170420f7a5568150cb46f6ab" Jan 21 19:16:28 crc kubenswrapper[5099]: E0121 19:16:28.916509 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 19:16:39 crc kubenswrapper[5099]: I0121 19:16:39.914587 5099 scope.go:117] "RemoveContainer" containerID="902eafa9de1bc3af59cd4e82be49a3162545844e170420f7a5568150cb46f6ab" Jan 21 19:16:39 crc kubenswrapper[5099]: E0121 19:16:39.915924 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 19:16:53 crc kubenswrapper[5099]: I0121 19:16:53.920643 5099 scope.go:117] "RemoveContainer" containerID="902eafa9de1bc3af59cd4e82be49a3162545844e170420f7a5568150cb46f6ab" Jan 21 19:16:53 crc kubenswrapper[5099]: E0121 19:16:53.921641 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 19:17:07 crc kubenswrapper[5099]: I0121 19:17:07.914336 5099 scope.go:117] "RemoveContainer" containerID="902eafa9de1bc3af59cd4e82be49a3162545844e170420f7a5568150cb46f6ab" Jan 21 19:17:07 crc kubenswrapper[5099]: E0121 19:17:07.917377 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 19:17:21 crc kubenswrapper[5099]: I0121 19:17:21.914637 5099 scope.go:117] "RemoveContainer" containerID="902eafa9de1bc3af59cd4e82be49a3162545844e170420f7a5568150cb46f6ab" Jan 21 19:17:21 crc kubenswrapper[5099]: E0121 19:17:21.915782 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 19:17:35 crc kubenswrapper[5099]: I0121 19:17:35.917871 5099 scope.go:117] "RemoveContainer" containerID="902eafa9de1bc3af59cd4e82be49a3162545844e170420f7a5568150cb46f6ab" Jan 21 19:17:35 crc kubenswrapper[5099]: E0121 19:17:35.919169 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 19:17:48 crc kubenswrapper[5099]: I0121 19:17:48.914082 5099 scope.go:117] "RemoveContainer" containerID="902eafa9de1bc3af59cd4e82be49a3162545844e170420f7a5568150cb46f6ab" Jan 21 19:17:48 crc kubenswrapper[5099]: E0121 19:17:48.915101 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 19:17:57 crc kubenswrapper[5099]: I0121 19:17:57.915620 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5qn8q"] Jan 21 19:17:57 crc kubenswrapper[5099]: I0121 19:17:57.917407 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="40e8a714-529b-46a2-b839-df3fd3ac1bc6" containerName="oc" Jan 21 19:17:57 crc kubenswrapper[5099]: I0121 19:17:57.917430 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="40e8a714-529b-46a2-b839-df3fd3ac1bc6" containerName="oc" Jan 21 19:17:57 crc kubenswrapper[5099]: I0121 19:17:57.917576 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="40e8a714-529b-46a2-b839-df3fd3ac1bc6" containerName="oc" Jan 21 19:17:57 crc kubenswrapper[5099]: I0121 19:17:57.937521 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5qn8q" Jan 21 19:17:57 crc kubenswrapper[5099]: I0121 19:17:57.948063 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5qn8q"] Jan 21 19:17:58 crc kubenswrapper[5099]: I0121 19:17:58.015012 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn9br\" (UniqueName: \"kubernetes.io/projected/ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0-kube-api-access-zn9br\") pod \"redhat-operators-5qn8q\" (UID: \"ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0\") " pod="openshift-marketplace/redhat-operators-5qn8q" Jan 21 19:17:58 crc kubenswrapper[5099]: I0121 19:17:58.015073 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0-catalog-content\") pod \"redhat-operators-5qn8q\" (UID: \"ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0\") " pod="openshift-marketplace/redhat-operators-5qn8q" Jan 21 19:17:58 crc kubenswrapper[5099]: I0121 19:17:58.015137 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0-utilities\") pod \"redhat-operators-5qn8q\" (UID: \"ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0\") " pod="openshift-marketplace/redhat-operators-5qn8q" Jan 21 19:17:58 crc kubenswrapper[5099]: I0121 19:17:58.116708 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0-utilities\") pod \"redhat-operators-5qn8q\" (UID: \"ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0\") " pod="openshift-marketplace/redhat-operators-5qn8q" Jan 21 19:17:58 crc kubenswrapper[5099]: I0121 19:17:58.116873 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zn9br\" (UniqueName: \"kubernetes.io/projected/ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0-kube-api-access-zn9br\") pod \"redhat-operators-5qn8q\" (UID: \"ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0\") " pod="openshift-marketplace/redhat-operators-5qn8q" Jan 21 19:17:58 crc kubenswrapper[5099]: I0121 19:17:58.116923 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0-catalog-content\") pod \"redhat-operators-5qn8q\" (UID: \"ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0\") " pod="openshift-marketplace/redhat-operators-5qn8q" Jan 21 19:17:58 crc kubenswrapper[5099]: I0121 19:17:58.117504 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0-utilities\") pod \"redhat-operators-5qn8q\" (UID: \"ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0\") " pod="openshift-marketplace/redhat-operators-5qn8q" Jan 21 19:17:58 crc kubenswrapper[5099]: I0121 19:17:58.117573 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0-catalog-content\") pod \"redhat-operators-5qn8q\" (UID: \"ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0\") " pod="openshift-marketplace/redhat-operators-5qn8q" Jan 21 19:17:58 crc kubenswrapper[5099]: I0121 19:17:58.143167 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn9br\" (UniqueName: \"kubernetes.io/projected/ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0-kube-api-access-zn9br\") pod \"redhat-operators-5qn8q\" (UID: \"ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0\") " pod="openshift-marketplace/redhat-operators-5qn8q" Jan 21 19:17:58 crc kubenswrapper[5099]: I0121 19:17:58.260930 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5qn8q" Jan 21 19:17:58 crc kubenswrapper[5099]: I0121 19:17:58.745478 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5qn8q"] Jan 21 19:17:58 crc kubenswrapper[5099]: I0121 19:17:58.769857 5099 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 19:17:59 crc kubenswrapper[5099]: I0121 19:17:59.651446 5099 generic.go:358] "Generic (PLEG): container finished" podID="ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0" containerID="31ff52c65d2c9762eb162756f129365be69182274979fe06edc6de21e8e59d09" exitCode=0 Jan 21 19:17:59 crc kubenswrapper[5099]: I0121 19:17:59.651570 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qn8q" event={"ID":"ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0","Type":"ContainerDied","Data":"31ff52c65d2c9762eb162756f129365be69182274979fe06edc6de21e8e59d09"} Jan 21 19:17:59 crc kubenswrapper[5099]: I0121 19:17:59.651972 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qn8q" event={"ID":"ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0","Type":"ContainerStarted","Data":"7ba1517a838b45cedc834dfe4693cc137069233829820000eb859be2dc8ba5eb"} Jan 21 19:18:00 crc kubenswrapper[5099]: I0121 19:18:00.159138 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483718-5kl5b"] Jan 21 19:18:00 crc kubenswrapper[5099]: I0121 19:18:00.166370 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483718-5kl5b" Jan 21 19:18:00 crc kubenswrapper[5099]: I0121 19:18:00.170098 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-z79sf\"" Jan 21 19:18:00 crc kubenswrapper[5099]: I0121 19:18:00.170271 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 19:18:00 crc kubenswrapper[5099]: I0121 19:18:00.171648 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 19:18:00 crc kubenswrapper[5099]: I0121 19:18:00.188344 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483718-5kl5b"] Jan 21 19:18:00 crc kubenswrapper[5099]: I0121 19:18:00.251751 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzrpm\" (UniqueName: \"kubernetes.io/projected/3b61c0f2-5afb-43ac-8d2d-ba2d8ba68459-kube-api-access-dzrpm\") pod \"auto-csr-approver-29483718-5kl5b\" (UID: \"3b61c0f2-5afb-43ac-8d2d-ba2d8ba68459\") " pod="openshift-infra/auto-csr-approver-29483718-5kl5b" Jan 21 19:18:00 crc kubenswrapper[5099]: I0121 19:18:00.354273 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dzrpm\" (UniqueName: \"kubernetes.io/projected/3b61c0f2-5afb-43ac-8d2d-ba2d8ba68459-kube-api-access-dzrpm\") pod \"auto-csr-approver-29483718-5kl5b\" (UID: \"3b61c0f2-5afb-43ac-8d2d-ba2d8ba68459\") " pod="openshift-infra/auto-csr-approver-29483718-5kl5b" Jan 21 19:18:00 crc kubenswrapper[5099]: I0121 19:18:00.381290 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzrpm\" (UniqueName: \"kubernetes.io/projected/3b61c0f2-5afb-43ac-8d2d-ba2d8ba68459-kube-api-access-dzrpm\") pod \"auto-csr-approver-29483718-5kl5b\" (UID: \"3b61c0f2-5afb-43ac-8d2d-ba2d8ba68459\") " pod="openshift-infra/auto-csr-approver-29483718-5kl5b" Jan 21 19:18:00 crc kubenswrapper[5099]: I0121 19:18:00.500650 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483718-5kl5b" Jan 21 19:18:00 crc kubenswrapper[5099]: I0121 19:18:00.829036 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483718-5kl5b"] Jan 21 19:18:00 crc kubenswrapper[5099]: W0121 19:18:00.846056 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b61c0f2_5afb_43ac_8d2d_ba2d8ba68459.slice/crio-8b8f528388ba390f1e8026b7ba5edf8b9a9087e17756294d5021677b59633cff WatchSource:0}: Error finding container 8b8f528388ba390f1e8026b7ba5edf8b9a9087e17756294d5021677b59633cff: Status 404 returned error can't find the container with id 8b8f528388ba390f1e8026b7ba5edf8b9a9087e17756294d5021677b59633cff Jan 21 19:18:00 crc kubenswrapper[5099]: I0121 19:18:00.914819 5099 scope.go:117] "RemoveContainer" containerID="902eafa9de1bc3af59cd4e82be49a3162545844e170420f7a5568150cb46f6ab" Jan 21 19:18:00 crc kubenswrapper[5099]: E0121 19:18:00.915251 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 19:18:01 crc kubenswrapper[5099]: I0121 19:18:01.677155 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483718-5kl5b" event={"ID":"3b61c0f2-5afb-43ac-8d2d-ba2d8ba68459","Type":"ContainerStarted","Data":"8b8f528388ba390f1e8026b7ba5edf8b9a9087e17756294d5021677b59633cff"} Jan 21 19:18:01 crc kubenswrapper[5099]: I0121 19:18:01.681555 5099 generic.go:358] "Generic (PLEG): container finished" podID="ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0" containerID="01328c9d13f0089ed5c1ffb5b0a988d728845c75c9926f2a6d9edc6bc429fe7a" exitCode=0 Jan 21 19:18:01 crc kubenswrapper[5099]: I0121 19:18:01.681845 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qn8q" event={"ID":"ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0","Type":"ContainerDied","Data":"01328c9d13f0089ed5c1ffb5b0a988d728845c75c9926f2a6d9edc6bc429fe7a"} Jan 21 19:18:02 crc kubenswrapper[5099]: I0121 19:18:02.694511 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qn8q" event={"ID":"ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0","Type":"ContainerStarted","Data":"8f087846a84ac7d0a50891637acf9517706fc66b726a5c3b46298cd810b949ff"} Jan 21 19:18:02 crc kubenswrapper[5099]: I0121 19:18:02.699129 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483718-5kl5b" event={"ID":"3b61c0f2-5afb-43ac-8d2d-ba2d8ba68459","Type":"ContainerStarted","Data":"b1627d467fc8df3227c8fb0a4acd8fec30e06b5add0b2187911ca09a31a4e365"} Jan 21 19:18:02 crc kubenswrapper[5099]: I0121 19:18:02.730506 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5qn8q" podStartSLOduration=4.689852781 podStartE2EDuration="5.730472906s" podCreationTimestamp="2026-01-21 19:17:57 +0000 UTC" firstStartedPulling="2026-01-21 19:17:59.653177343 +0000 UTC m=+3837.067139834" lastFinishedPulling="2026-01-21 19:18:00.693797488 +0000 UTC m=+3838.107759959" observedRunningTime="2026-01-21 19:18:02.722082846 +0000 UTC 
m=+3840.136045327" watchObservedRunningTime="2026-01-21 19:18:02.730472906 +0000 UTC m=+3840.144435387" Jan 21 19:18:02 crc kubenswrapper[5099]: I0121 19:18:02.740396 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29483718-5kl5b" podStartSLOduration=1.410623489 podStartE2EDuration="2.740360812s" podCreationTimestamp="2026-01-21 19:18:00 +0000 UTC" firstStartedPulling="2026-01-21 19:18:00.852282749 +0000 UTC m=+3838.266245220" lastFinishedPulling="2026-01-21 19:18:02.182020082 +0000 UTC m=+3839.595982543" observedRunningTime="2026-01-21 19:18:02.738419886 +0000 UTC m=+3840.152382367" watchObservedRunningTime="2026-01-21 19:18:02.740360812 +0000 UTC m=+3840.154323303" Jan 21 19:18:03 crc kubenswrapper[5099]: I0121 19:18:03.708981 5099 generic.go:358] "Generic (PLEG): container finished" podID="3b61c0f2-5afb-43ac-8d2d-ba2d8ba68459" containerID="b1627d467fc8df3227c8fb0a4acd8fec30e06b5add0b2187911ca09a31a4e365" exitCode=0 Jan 21 19:18:03 crc kubenswrapper[5099]: I0121 19:18:03.710591 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483718-5kl5b" event={"ID":"3b61c0f2-5afb-43ac-8d2d-ba2d8ba68459","Type":"ContainerDied","Data":"b1627d467fc8df3227c8fb0a4acd8fec30e06b5add0b2187911ca09a31a4e365"} Jan 21 19:18:05 crc kubenswrapper[5099]: I0121 19:18:05.026013 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483718-5kl5b" Jan 21 19:18:05 crc kubenswrapper[5099]: I0121 19:18:05.150353 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzrpm\" (UniqueName: \"kubernetes.io/projected/3b61c0f2-5afb-43ac-8d2d-ba2d8ba68459-kube-api-access-dzrpm\") pod \"3b61c0f2-5afb-43ac-8d2d-ba2d8ba68459\" (UID: \"3b61c0f2-5afb-43ac-8d2d-ba2d8ba68459\") " Jan 21 19:18:05 crc kubenswrapper[5099]: I0121 19:18:05.163189 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b61c0f2-5afb-43ac-8d2d-ba2d8ba68459-kube-api-access-dzrpm" (OuterVolumeSpecName: "kube-api-access-dzrpm") pod "3b61c0f2-5afb-43ac-8d2d-ba2d8ba68459" (UID: "3b61c0f2-5afb-43ac-8d2d-ba2d8ba68459"). InnerVolumeSpecName "kube-api-access-dzrpm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 19:18:05 crc kubenswrapper[5099]: I0121 19:18:05.252790 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dzrpm\" (UniqueName: \"kubernetes.io/projected/3b61c0f2-5afb-43ac-8d2d-ba2d8ba68459-kube-api-access-dzrpm\") on node \"crc\" DevicePath \"\"" Jan 21 19:18:05 crc kubenswrapper[5099]: I0121 19:18:05.730580 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483718-5kl5b" event={"ID":"3b61c0f2-5afb-43ac-8d2d-ba2d8ba68459","Type":"ContainerDied","Data":"8b8f528388ba390f1e8026b7ba5edf8b9a9087e17756294d5021677b59633cff"} Jan 21 19:18:05 crc kubenswrapper[5099]: I0121 19:18:05.730901 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b8f528388ba390f1e8026b7ba5edf8b9a9087e17756294d5021677b59633cff" Jan 21 19:18:05 crc kubenswrapper[5099]: I0121 19:18:05.730663 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483718-5kl5b" Jan 21 19:18:06 crc kubenswrapper[5099]: I0121 19:18:06.115665 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483712-jdxk5"] Jan 21 19:18:06 crc kubenswrapper[5099]: I0121 19:18:06.125134 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483712-jdxk5"] Jan 21 19:18:07 crc kubenswrapper[5099]: I0121 19:18:07.922700 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f11078d-06da-4c9b-8d6c-dd45cbb8ff5d" path="/var/lib/kubelet/pods/1f11078d-06da-4c9b-8d6c-dd45cbb8ff5d/volumes" Jan 21 19:18:08 crc kubenswrapper[5099]: I0121 19:18:08.262859 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-5qn8q" Jan 21 19:18:08 crc kubenswrapper[5099]: I0121 19:18:08.262918 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5qn8q" Jan 21 19:18:08 crc kubenswrapper[5099]: I0121 19:18:08.324637 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5qn8q" Jan 21 19:18:08 crc kubenswrapper[5099]: I0121 19:18:08.822781 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5qn8q" Jan 21 19:18:08 crc kubenswrapper[5099]: I0121 19:18:08.886985 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5qn8q"] Jan 21 19:18:10 crc kubenswrapper[5099]: I0121 19:18:10.782762 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5qn8q" podUID="ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0" containerName="registry-server" containerID="cri-o://8f087846a84ac7d0a50891637acf9517706fc66b726a5c3b46298cd810b949ff" gracePeriod=2 Jan 21 19:18:11 crc kubenswrapper[5099]: I0121 19:18:11.759895 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5qn8q" Jan 21 19:18:11 crc kubenswrapper[5099]: I0121 19:18:11.795623 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zn9br\" (UniqueName: \"kubernetes.io/projected/ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0-kube-api-access-zn9br\") pod \"ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0\" (UID: \"ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0\") " Jan 21 19:18:11 crc kubenswrapper[5099]: I0121 19:18:11.795705 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0-catalog-content\") pod \"ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0\" (UID: \"ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0\") " Jan 21 19:18:11 crc kubenswrapper[5099]: I0121 19:18:11.795774 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0-utilities\") pod \"ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0\" (UID: \"ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0\") " Jan 21 19:18:11 crc kubenswrapper[5099]: I0121 19:18:11.798219 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0-utilities" (OuterVolumeSpecName: "utilities") pod "ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0" (UID: "ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 19:18:11 crc kubenswrapper[5099]: I0121 19:18:11.812074 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0-kube-api-access-zn9br" (OuterVolumeSpecName: "kube-api-access-zn9br") pod "ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0" (UID: "ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0"). InnerVolumeSpecName "kube-api-access-zn9br". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 19:18:11 crc kubenswrapper[5099]: I0121 19:18:11.813822 5099 generic.go:358] "Generic (PLEG): container finished" podID="ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0" containerID="8f087846a84ac7d0a50891637acf9517706fc66b726a5c3b46298cd810b949ff" exitCode=0 Jan 21 19:18:11 crc kubenswrapper[5099]: I0121 19:18:11.813907 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qn8q" event={"ID":"ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0","Type":"ContainerDied","Data":"8f087846a84ac7d0a50891637acf9517706fc66b726a5c3b46298cd810b949ff"} Jan 21 19:18:11 crc kubenswrapper[5099]: I0121 19:18:11.813962 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qn8q" event={"ID":"ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0","Type":"ContainerDied","Data":"7ba1517a838b45cedc834dfe4693cc137069233829820000eb859be2dc8ba5eb"} Jan 21 19:18:11 crc kubenswrapper[5099]: I0121 19:18:11.813984 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5qn8q" Jan 21 19:18:11 crc kubenswrapper[5099]: I0121 19:18:11.813990 5099 scope.go:117] "RemoveContainer" containerID="8f087846a84ac7d0a50891637acf9517706fc66b726a5c3b46298cd810b949ff" Jan 21 19:18:11 crc kubenswrapper[5099]: I0121 19:18:11.891966 5099 scope.go:117] "RemoveContainer" containerID="01328c9d13f0089ed5c1ffb5b0a988d728845c75c9926f2a6d9edc6bc429fe7a" Jan 21 19:18:11 crc kubenswrapper[5099]: I0121 19:18:11.897756 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zn9br\" (UniqueName: \"kubernetes.io/projected/ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0-kube-api-access-zn9br\") on node \"crc\" DevicePath \"\"" Jan 21 19:18:11 crc kubenswrapper[5099]: I0121 19:18:11.897811 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 19:18:11 crc kubenswrapper[5099]: I0121 19:18:11.922216 5099 scope.go:117] "RemoveContainer" containerID="31ff52c65d2c9762eb162756f129365be69182274979fe06edc6de21e8e59d09" Jan 21 19:18:11 crc kubenswrapper[5099]: I0121 19:18:11.993133 5099 scope.go:117] "RemoveContainer" containerID="8f087846a84ac7d0a50891637acf9517706fc66b726a5c3b46298cd810b949ff" Jan 21 19:18:11 crc kubenswrapper[5099]: E0121 19:18:11.993640 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f087846a84ac7d0a50891637acf9517706fc66b726a5c3b46298cd810b949ff\": container with ID starting with 8f087846a84ac7d0a50891637acf9517706fc66b726a5c3b46298cd810b949ff not found: ID does not exist" containerID="8f087846a84ac7d0a50891637acf9517706fc66b726a5c3b46298cd810b949ff" Jan 21 19:18:11 crc kubenswrapper[5099]: I0121 19:18:11.993699 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f087846a84ac7d0a50891637acf9517706fc66b726a5c3b46298cd810b949ff"} err="failed to get container status \"8f087846a84ac7d0a50891637acf9517706fc66b726a5c3b46298cd810b949ff\": rpc error: code = NotFound desc = could not find container \"8f087846a84ac7d0a50891637acf9517706fc66b726a5c3b46298cd810b949ff\": container with ID starting with 8f087846a84ac7d0a50891637acf9517706fc66b726a5c3b46298cd810b949ff not found: ID does not exist" Jan 21 19:18:11 crc kubenswrapper[5099]: I0121 19:18:11.993781 5099 scope.go:117] "RemoveContainer" containerID="01328c9d13f0089ed5c1ffb5b0a988d728845c75c9926f2a6d9edc6bc429fe7a" Jan 21 19:18:11 crc kubenswrapper[5099]: E0121 19:18:11.994566 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01328c9d13f0089ed5c1ffb5b0a988d728845c75c9926f2a6d9edc6bc429fe7a\": container with ID starting with 01328c9d13f0089ed5c1ffb5b0a988d728845c75c9926f2a6d9edc6bc429fe7a not found: ID does not exist" containerID="01328c9d13f0089ed5c1ffb5b0a988d728845c75c9926f2a6d9edc6bc429fe7a" Jan 21 19:18:11 crc kubenswrapper[5099]: I0121 19:18:11.994616 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01328c9d13f0089ed5c1ffb5b0a988d728845c75c9926f2a6d9edc6bc429fe7a"} err="failed to get container status \"01328c9d13f0089ed5c1ffb5b0a988d728845c75c9926f2a6d9edc6bc429fe7a\": rpc error: code = NotFound desc = could not find container \"01328c9d13f0089ed5c1ffb5b0a988d728845c75c9926f2a6d9edc6bc429fe7a\": container with ID starting with 
01328c9d13f0089ed5c1ffb5b0a988d728845c75c9926f2a6d9edc6bc429fe7a not found: ID does not exist" Jan 21 19:18:11 crc kubenswrapper[5099]: I0121 19:18:11.994652 5099 scope.go:117] "RemoveContainer" containerID="31ff52c65d2c9762eb162756f129365be69182274979fe06edc6de21e8e59d09" Jan 21 19:18:11 crc kubenswrapper[5099]: E0121 19:18:11.995071 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31ff52c65d2c9762eb162756f129365be69182274979fe06edc6de21e8e59d09\": container with ID starting with 31ff52c65d2c9762eb162756f129365be69182274979fe06edc6de21e8e59d09 not found: ID does not exist" containerID="31ff52c65d2c9762eb162756f129365be69182274979fe06edc6de21e8e59d09" Jan 21 19:18:11 crc kubenswrapper[5099]: I0121 19:18:11.995100 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31ff52c65d2c9762eb162756f129365be69182274979fe06edc6de21e8e59d09"} err="failed to get container status \"31ff52c65d2c9762eb162756f129365be69182274979fe06edc6de21e8e59d09\": rpc error: code = NotFound desc = could not find container \"31ff52c65d2c9762eb162756f129365be69182274979fe06edc6de21e8e59d09\": container with ID starting with 31ff52c65d2c9762eb162756f129365be69182274979fe06edc6de21e8e59d09 not found: ID does not exist" Jan 21 19:18:12 crc kubenswrapper[5099]: I0121 19:18:12.811080 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0" (UID: "ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 19:18:12 crc kubenswrapper[5099]: I0121 19:18:12.812565 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 19:18:13 crc kubenswrapper[5099]: I0121 19:18:13.053989 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5qn8q"] Jan 21 19:18:13 crc kubenswrapper[5099]: I0121 19:18:13.062688 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5qn8q"] Jan 21 19:18:13 crc kubenswrapper[5099]: I0121 19:18:13.923847 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0" path="/var/lib/kubelet/pods/ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0/volumes" Jan 21 19:18:14 crc kubenswrapper[5099]: I0121 19:18:14.914271 5099 scope.go:117] "RemoveContainer" containerID="902eafa9de1bc3af59cd4e82be49a3162545844e170420f7a5568150cb46f6ab" Jan 21 19:18:14 crc kubenswrapper[5099]: E0121 19:18:14.915287 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 19:18:22 crc kubenswrapper[5099]: I0121 19:18:22.633250 5099 scope.go:117] "RemoveContainer" containerID="269f75b7657b9ef8e02147cfa1c57c809778b944f8af68a824132141c80ae300" Jan 21 19:18:29 crc kubenswrapper[5099]: I0121 19:18:29.914351 
Jan 21 19:18:42 crc kubenswrapper[5099]: I0121 19:18:42.914885 5099 scope.go:117] "RemoveContainer" containerID="902eafa9de1bc3af59cd4e82be49a3162545844e170420f7a5568150cb46f6ab"
Jan 21 19:18:42 crc kubenswrapper[5099]: E0121 19:18:42.915793 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c"
Jan 21 19:18:56 crc kubenswrapper[5099]: I0121 19:18:56.914726 5099 scope.go:117] "RemoveContainer" containerID="902eafa9de1bc3af59cd4e82be49a3162545844e170420f7a5568150cb46f6ab"
Jan 21 19:18:56 crc kubenswrapper[5099]: E0121 19:18:56.915821 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c"
Jan 21 19:19:03 crc kubenswrapper[5099]: I0121 19:19:03.251866 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-n9dg7"]
Jan 21 19:19:03 crc kubenswrapper[5099]: I0121 19:19:03.254886 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0" containerName="extract-content"
Jan 21 19:19:03 crc kubenswrapper[5099]: I0121 19:19:03.254950 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0" containerName="extract-content"
Jan 21 19:19:03 crc kubenswrapper[5099]: I0121 19:19:03.254986 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0" containerName="extract-utilities"
Jan 21 19:19:03 crc kubenswrapper[5099]: I0121 19:19:03.255004 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0" containerName="extract-utilities"
Jan 21 19:19:03 crc kubenswrapper[5099]: I0121 19:19:03.255090 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3b61c0f2-5afb-43ac-8d2d-ba2d8ba68459" containerName="oc"
Jan 21 19:19:03 crc kubenswrapper[5099]: I0121 19:19:03.255110 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b61c0f2-5afb-43ac-8d2d-ba2d8ba68459" containerName="oc"
Jan 21 19:19:03 crc kubenswrapper[5099]: I0121 19:19:03.255151 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0" containerName="registry-server"
Jan 21 19:19:03 crc kubenswrapper[5099]: I0121 19:19:03.255167 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0" containerName="registry-server"
Jan 21 19:19:03 crc kubenswrapper[5099]: I0121 19:19:03.255520 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="3b61c0f2-5afb-43ac-8d2d-ba2d8ba68459" containerName="oc"
Jan 21 19:19:03 crc kubenswrapper[5099]: I0121 19:19:03.255570 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="ce58c6d6-bf56-4ddd-a2b0-eb0e75f70ae0" containerName="registry-server"
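Note: before admitting certified-operators-n9dg7, the CPU and memory managers purge per-container accounting left behind by pods that no longer exist (the just-removed redhat-operators pod and an earlier oc container). Conceptually it is a map keyed by pod UID and container name, pruned against the set of live pods; a toy model, not the kubelet's actual containerMap/state types:

    package main

    import "fmt"

    type key struct{ podUID, container string }

    // pruneStale drops resource assignments for pods that are no longer
    // active, mirroring the RemoveStaleState lines above.
    func pruneStale(assignments map[key]string, active map[string]bool) {
    	for k := range assignments {
    		if !active[k.podUID] {
    			fmt.Printf("removing stale state podUID=%q containerName=%q\n", k.podUID, k.container)
    			delete(assignments, k)
    		}
    	}
    }

    func main() {
    	state := map[key]string{
    		{"ce58c6d6", "registry-server"}: "cpus 0-3",
    		{"48c47ebd", "registry-server"}: "cpus 0-3",
    	}
    	pruneStale(state, map[string]bool{"48c47ebd": true})
    	fmt.Println(len(state)) // 1
    }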
Jan 21 19:19:03 crc kubenswrapper[5099]: I0121 19:19:03.264339 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n9dg7"
Jan 21 19:19:03 crc kubenswrapper[5099]: I0121 19:19:03.265675 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n9dg7"]
Jan 21 19:19:03 crc kubenswrapper[5099]: I0121 19:19:03.369247 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48c47ebd-d552-4b3c-a2a5-d19c010855f4-catalog-content\") pod \"certified-operators-n9dg7\" (UID: \"48c47ebd-d552-4b3c-a2a5-d19c010855f4\") " pod="openshift-marketplace/certified-operators-n9dg7"
Jan 21 19:19:03 crc kubenswrapper[5099]: I0121 19:19:03.369662 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jthbg\" (UniqueName: \"kubernetes.io/projected/48c47ebd-d552-4b3c-a2a5-d19c010855f4-kube-api-access-jthbg\") pod \"certified-operators-n9dg7\" (UID: \"48c47ebd-d552-4b3c-a2a5-d19c010855f4\") " pod="openshift-marketplace/certified-operators-n9dg7"
Jan 21 19:19:03 crc kubenswrapper[5099]: I0121 19:19:03.369976 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48c47ebd-d552-4b3c-a2a5-d19c010855f4-utilities\") pod \"certified-operators-n9dg7\" (UID: \"48c47ebd-d552-4b3c-a2a5-d19c010855f4\") " pod="openshift-marketplace/certified-operators-n9dg7"
Jan 21 19:19:03 crc kubenswrapper[5099]: I0121 19:19:03.471679 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48c47ebd-d552-4b3c-a2a5-d19c010855f4-catalog-content\") pod \"certified-operators-n9dg7\" (UID: \"48c47ebd-d552-4b3c-a2a5-d19c010855f4\") " pod="openshift-marketplace/certified-operators-n9dg7"
Jan 21 19:19:03 crc kubenswrapper[5099]: I0121 19:19:03.472098 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jthbg\" (UniqueName: \"kubernetes.io/projected/48c47ebd-d552-4b3c-a2a5-d19c010855f4-kube-api-access-jthbg\") pod \"certified-operators-n9dg7\" (UID: \"48c47ebd-d552-4b3c-a2a5-d19c010855f4\") " pod="openshift-marketplace/certified-operators-n9dg7"
Jan 21 19:19:03 crc kubenswrapper[5099]: I0121 19:19:03.472237 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48c47ebd-d552-4b3c-a2a5-d19c010855f4-utilities\") pod \"certified-operators-n9dg7\" (UID: \"48c47ebd-d552-4b3c-a2a5-d19c010855f4\") " pod="openshift-marketplace/certified-operators-n9dg7"
Jan 21 19:19:03 crc kubenswrapper[5099]: I0121 19:19:03.472259 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48c47ebd-d552-4b3c-a2a5-d19c010855f4-catalog-content\") pod \"certified-operators-n9dg7\" (UID: \"48c47ebd-d552-4b3c-a2a5-d19c010855f4\") " pod="openshift-marketplace/certified-operators-n9dg7"
Jan 21 19:19:03 crc kubenswrapper[5099]: I0121 19:19:03.472542 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48c47ebd-d552-4b3c-a2a5-d19c010855f4-utilities\") pod \"certified-operators-n9dg7\" (UID: \"48c47ebd-d552-4b3c-a2a5-d19c010855f4\") " pod="openshift-marketplace/certified-operators-n9dg7"
Jan 21 19:19:03 crc kubenswrapper[5099]: I0121 19:19:03.500850 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jthbg\" (UniqueName: \"kubernetes.io/projected/48c47ebd-d552-4b3c-a2a5-d19c010855f4-kube-api-access-jthbg\") pod \"certified-operators-n9dg7\" (UID: \"48c47ebd-d552-4b3c-a2a5-d19c010855f4\") " pod="openshift-marketplace/certified-operators-n9dg7"
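Note: the volume lines above follow the volume manager's reconciler pattern: the desired state (what the pod spec mounts) is diffed against the actual state, and each missing volume goes through VerifyControllerAttachedVolume and then MountVolume.SetUp. A loose sketch of the loop shape (illustrative names, not kubelet types):

    package main

    import "fmt"

    // reconcile mounts whatever desired has that actual lacks. The kubelet
    // runs this repeatedly, which is why "started" lines are followed by
    // separate "SetUp succeeded" lines.
    func reconcile(desired []string, actual map[string]bool, setUp func(string) error) {
    	for _, vol := range desired {
    		if actual[vol] {
    			continue
    		}
    		fmt.Printf("MountVolume started for volume %q\n", vol)
    		if err := setUp(vol); err != nil {
    			fmt.Printf("SetUp of %q failed: %v (left for the next pass)\n", vol, err)
    			continue
    		}
    		actual[vol] = true
    		fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", vol)
    	}
    }

    func main() {
    	desired := []string{"catalog-content", "utilities", "kube-api-access-jthbg"}
    	reconcile(desired, map[string]bool{}, func(string) error { return nil })
    }

The same machinery runs in reverse at teardown: "UnmountVolume started", "UnmountVolume.TearDown succeeded", then "Volume detached", as seen for the previous pod above.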
(UniqueName: \"kubernetes.io/empty-dir/48c47ebd-d552-4b3c-a2a5-d19c010855f4-catalog-content\") pod \"certified-operators-n9dg7\" (UID: \"48c47ebd-d552-4b3c-a2a5-d19c010855f4\") " pod="openshift-marketplace/certified-operators-n9dg7" Jan 21 19:19:03 crc kubenswrapper[5099]: I0121 19:19:03.472542 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48c47ebd-d552-4b3c-a2a5-d19c010855f4-utilities\") pod \"certified-operators-n9dg7\" (UID: \"48c47ebd-d552-4b3c-a2a5-d19c010855f4\") " pod="openshift-marketplace/certified-operators-n9dg7" Jan 21 19:19:03 crc kubenswrapper[5099]: I0121 19:19:03.500850 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jthbg\" (UniqueName: \"kubernetes.io/projected/48c47ebd-d552-4b3c-a2a5-d19c010855f4-kube-api-access-jthbg\") pod \"certified-operators-n9dg7\" (UID: \"48c47ebd-d552-4b3c-a2a5-d19c010855f4\") " pod="openshift-marketplace/certified-operators-n9dg7" Jan 21 19:19:03 crc kubenswrapper[5099]: I0121 19:19:03.601508 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n9dg7" Jan 21 19:19:04 crc kubenswrapper[5099]: I0121 19:19:04.077275 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n9dg7"] Jan 21 19:19:04 crc kubenswrapper[5099]: I0121 19:19:04.388777 5099 generic.go:358] "Generic (PLEG): container finished" podID="48c47ebd-d552-4b3c-a2a5-d19c010855f4" containerID="ffb7bd5dfa3757d8fe88c215a771b2f0ecb4add3d1bd85d18266b23860faeb3f" exitCode=0 Jan 21 19:19:04 crc kubenswrapper[5099]: I0121 19:19:04.389322 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n9dg7" event={"ID":"48c47ebd-d552-4b3c-a2a5-d19c010855f4","Type":"ContainerDied","Data":"ffb7bd5dfa3757d8fe88c215a771b2f0ecb4add3d1bd85d18266b23860faeb3f"} Jan 21 19:19:04 crc kubenswrapper[5099]: I0121 19:19:04.389360 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n9dg7" event={"ID":"48c47ebd-d552-4b3c-a2a5-d19c010855f4","Type":"ContainerStarted","Data":"b903b1afaa4da1f264de120f1f827ce41d330e61be9ef3492649e896424b303f"} Jan 21 19:19:05 crc kubenswrapper[5099]: I0121 19:19:05.399309 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n9dg7" event={"ID":"48c47ebd-d552-4b3c-a2a5-d19c010855f4","Type":"ContainerStarted","Data":"449347fbfbd2eb9765edceed8cf659bd310244c5fd260ece73cd2a1599fe56c0"} Jan 21 19:19:05 crc kubenswrapper[5099]: I0121 19:19:05.707354 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6pvpm_d9b34413-4767-4d59-b13b-8f882453977a/kube-multus/0.log" Jan 21 19:19:05 crc kubenswrapper[5099]: I0121 19:19:05.709422 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6pvpm_d9b34413-4767-4d59-b13b-8f882453977a/kube-multus/0.log" Jan 21 19:19:05 crc kubenswrapper[5099]: I0121 19:19:05.717291 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 19:19:05 crc kubenswrapper[5099]: I0121 19:19:05.718638 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 19:19:06 crc 
Jan 21 19:19:07 crc kubenswrapper[5099]: I0121 19:19:07.431046 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n9dg7" event={"ID":"48c47ebd-d552-4b3c-a2a5-d19c010855f4","Type":"ContainerStarted","Data":"7a9b9b20605e0d83d0ea4158aee0fd4dfbb42d0204b146c16a6292193488d7f0"}
Jan 21 19:19:07 crc kubenswrapper[5099]: I0121 19:19:07.458241 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-n9dg7" podStartSLOduration=3.7261887590000002 podStartE2EDuration="4.458217572s" podCreationTimestamp="2026-01-21 19:19:03 +0000 UTC" firstStartedPulling="2026-01-21 19:19:04.390388096 +0000 UTC m=+3901.804350567" lastFinishedPulling="2026-01-21 19:19:05.122416919 +0000 UTC m=+3902.536379380" observedRunningTime="2026-01-21 19:19:07.44928241 +0000 UTC m=+3904.863244931" watchObservedRunningTime="2026-01-21 19:19:07.458217572 +0000 UTC m=+3904.872180043"
Jan 21 19:19:10 crc kubenswrapper[5099]: I0121 19:19:10.914072 5099 scope.go:117] "RemoveContainer" containerID="902eafa9de1bc3af59cd4e82be49a3162545844e170420f7a5568150cb46f6ab"
Jan 21 19:19:10 crc kubenswrapper[5099]: E0121 19:19:10.915405 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c"
Jan 21 19:19:13 crc kubenswrapper[5099]: I0121 19:19:13.602285 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-n9dg7"
Jan 21 19:19:13 crc kubenswrapper[5099]: I0121 19:19:13.602659 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-n9dg7"
Jan 21 19:19:13 crc kubenswrapper[5099]: I0121 19:19:13.653087 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-n9dg7"
Jan 21 19:19:14 crc kubenswrapper[5099]: I0121 19:19:14.586281 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-n9dg7"
Jan 21 19:19:14 crc kubenswrapper[5099]: I0121 19:19:14.649775 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n9dg7"]
Jan 21 19:19:16 crc kubenswrapper[5099]: I0121 19:19:16.534320 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-n9dg7" podUID="48c47ebd-d552-4b3c-a2a5-d19c010855f4" containerName="registry-server" containerID="cri-o://7a9b9b20605e0d83d0ea4158aee0fd4dfbb42d0204b146c16a6292193488d7f0" gracePeriod=2
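Note: "Killing container with a grace period ... gracePeriod=2" is the standard termination sequence: SIGTERM first, then SIGKILL if the container is still running when the grace period lapses. Marketplace catalog pods ask for only two seconds (the machine-config-daemon later gets gracePeriod=600), so the ContainerDied events land within the next second. A process-level sketch of the same logic, using plain signals rather than CRI:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"syscall"
    	"time"
    )

    // killWithGrace asks a process to exit (SIGTERM) and forces the issue
    // (SIGKILL) once the grace period runs out. Error handling elided.
    func killWithGrace(cmd *exec.Cmd, grace time.Duration) {
    	done := make(chan error, 1)
    	go func() { done <- cmd.Wait() }()

    	cmd.Process.Signal(syscall.SIGTERM)
    	select {
    	case <-done:
    		fmt.Println("exited within grace period")
    	case <-time.After(grace):
    		cmd.Process.Kill()
    		<-done
    		fmt.Println("killed after grace period expired")
    	}
    }

    func main() {
    	cmd := exec.Command("sleep", "60") // stands in for registry-server
    	if err := cmd.Start(); err != nil {
    		panic(err)
    	}
    	killWithGrace(cmd, 2*time.Second)
    }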
Jan 21 19:19:17 crc kubenswrapper[5099]: I0121 19:19:17.547175 5099 generic.go:358] "Generic (PLEG): container finished" podID="48c47ebd-d552-4b3c-a2a5-d19c010855f4" containerID="7a9b9b20605e0d83d0ea4158aee0fd4dfbb42d0204b146c16a6292193488d7f0" exitCode=0
Jan 21 19:19:17 crc kubenswrapper[5099]: I0121 19:19:17.547298 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n9dg7" event={"ID":"48c47ebd-d552-4b3c-a2a5-d19c010855f4","Type":"ContainerDied","Data":"7a9b9b20605e0d83d0ea4158aee0fd4dfbb42d0204b146c16a6292193488d7f0"}
Jan 21 19:19:17 crc kubenswrapper[5099]: I0121 19:19:17.548418 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n9dg7" event={"ID":"48c47ebd-d552-4b3c-a2a5-d19c010855f4","Type":"ContainerDied","Data":"b903b1afaa4da1f264de120f1f827ce41d330e61be9ef3492649e896424b303f"}
Jan 21 19:19:17 crc kubenswrapper[5099]: I0121 19:19:17.548520 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b903b1afaa4da1f264de120f1f827ce41d330e61be9ef3492649e896424b303f"
Jan 21 19:19:17 crc kubenswrapper[5099]: I0121 19:19:17.583984 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n9dg7"
Jan 21 19:19:17 crc kubenswrapper[5099]: I0121 19:19:17.652274 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48c47ebd-d552-4b3c-a2a5-d19c010855f4-catalog-content\") pod \"48c47ebd-d552-4b3c-a2a5-d19c010855f4\" (UID: \"48c47ebd-d552-4b3c-a2a5-d19c010855f4\") "
Jan 21 19:19:17 crc kubenswrapper[5099]: I0121 19:19:17.655538 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jthbg\" (UniqueName: \"kubernetes.io/projected/48c47ebd-d552-4b3c-a2a5-d19c010855f4-kube-api-access-jthbg\") pod \"48c47ebd-d552-4b3c-a2a5-d19c010855f4\" (UID: \"48c47ebd-d552-4b3c-a2a5-d19c010855f4\") "
Jan 21 19:19:17 crc kubenswrapper[5099]: I0121 19:19:17.656102 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48c47ebd-d552-4b3c-a2a5-d19c010855f4-utilities\") pod \"48c47ebd-d552-4b3c-a2a5-d19c010855f4\" (UID: \"48c47ebd-d552-4b3c-a2a5-d19c010855f4\") "
Jan 21 19:19:17 crc kubenswrapper[5099]: I0121 19:19:17.657561 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48c47ebd-d552-4b3c-a2a5-d19c010855f4-utilities" (OuterVolumeSpecName: "utilities") pod "48c47ebd-d552-4b3c-a2a5-d19c010855f4" (UID: "48c47ebd-d552-4b3c-a2a5-d19c010855f4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 19:19:17 crc kubenswrapper[5099]: I0121 19:19:17.658146 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48c47ebd-d552-4b3c-a2a5-d19c010855f4-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 19:19:17 crc kubenswrapper[5099]: I0121 19:19:17.668944 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48c47ebd-d552-4b3c-a2a5-d19c010855f4-kube-api-access-jthbg" (OuterVolumeSpecName: "kube-api-access-jthbg") pod "48c47ebd-d552-4b3c-a2a5-d19c010855f4" (UID: "48c47ebd-d552-4b3c-a2a5-d19c010855f4"). InnerVolumeSpecName "kube-api-access-jthbg". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 19:19:17 crc kubenswrapper[5099]: I0121 19:19:17.700356 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48c47ebd-d552-4b3c-a2a5-d19c010855f4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "48c47ebd-d552-4b3c-a2a5-d19c010855f4" (UID: "48c47ebd-d552-4b3c-a2a5-d19c010855f4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 19:19:17 crc kubenswrapper[5099]: I0121 19:19:17.761588 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jthbg\" (UniqueName: \"kubernetes.io/projected/48c47ebd-d552-4b3c-a2a5-d19c010855f4-kube-api-access-jthbg\") on node \"crc\" DevicePath \"\""
Jan 21 19:19:17 crc kubenswrapper[5099]: I0121 19:19:17.761646 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48c47ebd-d552-4b3c-a2a5-d19c010855f4-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 19:19:18 crc kubenswrapper[5099]: I0121 19:19:18.562924 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n9dg7"
Jan 21 19:19:18 crc kubenswrapper[5099]: I0121 19:19:18.592873 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n9dg7"]
Jan 21 19:19:18 crc kubenswrapper[5099]: I0121 19:19:18.598587 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-n9dg7"]
Jan 21 19:19:19 crc kubenswrapper[5099]: I0121 19:19:19.931372 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48c47ebd-d552-4b3c-a2a5-d19c010855f4" path="/var/lib/kubelet/pods/48c47ebd-d552-4b3c-a2a5-d19c010855f4/volumes"
Jan 21 19:19:24 crc kubenswrapper[5099]: I0121 19:19:24.915681 5099 scope.go:117] "RemoveContainer" containerID="902eafa9de1bc3af59cd4e82be49a3162545844e170420f7a5568150cb46f6ab"
Jan 21 19:19:24 crc kubenswrapper[5099]: E0121 19:19:24.919909 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c"
Jan 21 19:19:39 crc kubenswrapper[5099]: I0121 19:19:39.914234 5099 scope.go:117] "RemoveContainer" containerID="902eafa9de1bc3af59cd4e82be49a3162545844e170420f7a5568150cb46f6ab"
Jan 21 19:19:39 crc kubenswrapper[5099]: E0121 19:19:39.915226 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c"
Jan 21 19:19:54 crc kubenswrapper[5099]: I0121 19:19:54.914840 5099 scope.go:117] "RemoveContainer" containerID="902eafa9de1bc3af59cd4e82be49a3162545844e170420f7a5568150cb46f6ab"
Jan 21 19:19:54 crc kubenswrapper[5099]: E0121 19:19:54.916153 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c"
Jan 21 19:20:00 crc kubenswrapper[5099]: I0121 19:20:00.160640 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483720-zmvtr"]
Jan 21 19:20:00 crc kubenswrapper[5099]: I0121 19:20:00.168259 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="48c47ebd-d552-4b3c-a2a5-d19c010855f4" containerName="extract-utilities"
Jan 21 19:20:00 crc kubenswrapper[5099]: I0121 19:20:00.168345 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="48c47ebd-d552-4b3c-a2a5-d19c010855f4" containerName="extract-utilities"
Jan 21 19:20:00 crc kubenswrapper[5099]: I0121 19:20:00.168380 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="48c47ebd-d552-4b3c-a2a5-d19c010855f4" containerName="extract-content"
Jan 21 19:20:00 crc kubenswrapper[5099]: I0121 19:20:00.168393 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="48c47ebd-d552-4b3c-a2a5-d19c010855f4" containerName="extract-content"
Jan 21 19:20:00 crc kubenswrapper[5099]: I0121 19:20:00.168433 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="48c47ebd-d552-4b3c-a2a5-d19c010855f4" containerName="registry-server"
Jan 21 19:20:00 crc kubenswrapper[5099]: I0121 19:20:00.168444 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="48c47ebd-d552-4b3c-a2a5-d19c010855f4" containerName="registry-server"
Jan 21 19:20:00 crc kubenswrapper[5099]: I0121 19:20:00.168950 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="48c47ebd-d552-4b3c-a2a5-d19c010855f4" containerName="registry-server"
Jan 21 19:20:00 crc kubenswrapper[5099]: I0121 19:20:00.193076 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483720-zmvtr"]
Jan 21 19:20:00 crc kubenswrapper[5099]: I0121 19:20:00.193292 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483720-zmvtr"
Jan 21 19:20:00 crc kubenswrapper[5099]: I0121 19:20:00.198132 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 21 19:20:00 crc kubenswrapper[5099]: I0121 19:20:00.198397 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-z79sf\""
Jan 21 19:20:00 crc kubenswrapper[5099]: I0121 19:20:00.198680 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 21 19:20:00 crc kubenswrapper[5099]: I0121 19:20:00.242633 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxpgn\" (UniqueName: \"kubernetes.io/projected/751dc716-b8a6-45b9-9c5e-0382252507ec-kube-api-access-sxpgn\") pod \"auto-csr-approver-29483720-zmvtr\" (UID: \"751dc716-b8a6-45b9-9c5e-0382252507ec\") " pod="openshift-infra/auto-csr-approver-29483720-zmvtr"
Jan 21 19:20:00 crc kubenswrapper[5099]: I0121 19:20:00.344960 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sxpgn\" (UniqueName: \"kubernetes.io/projected/751dc716-b8a6-45b9-9c5e-0382252507ec-kube-api-access-sxpgn\") pod \"auto-csr-approver-29483720-zmvtr\" (UID: \"751dc716-b8a6-45b9-9c5e-0382252507ec\") " pod="openshift-infra/auto-csr-approver-29483720-zmvtr"
Jan 21 19:20:00 crc kubenswrapper[5099]: I0121 19:20:00.407066 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxpgn\" (UniqueName: \"kubernetes.io/projected/751dc716-b8a6-45b9-9c5e-0382252507ec-kube-api-access-sxpgn\") pod \"auto-csr-approver-29483720-zmvtr\" (UID: \"751dc716-b8a6-45b9-9c5e-0382252507ec\") " pod="openshift-infra/auto-csr-approver-29483720-zmvtr"
Jan 21 19:20:00 crc kubenswrapper[5099]: I0121 19:20:00.524637 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483720-zmvtr"
Jan 21 19:20:01 crc kubenswrapper[5099]: I0121 19:20:01.058649 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483720-zmvtr"]
Jan 21 19:20:02 crc kubenswrapper[5099]: I0121 19:20:02.029314 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483720-zmvtr" event={"ID":"751dc716-b8a6-45b9-9c5e-0382252507ec","Type":"ContainerStarted","Data":"251ce657a602e7b8e8fdfd18e3577bab2e6716606f77f571283ec1f3db6cb5d9"}
Jan 21 19:20:03 crc kubenswrapper[5099]: I0121 19:20:03.043894 5099 generic.go:358] "Generic (PLEG): container finished" podID="751dc716-b8a6-45b9-9c5e-0382252507ec" containerID="35c27a5fe3d17a3b17e4f43b89ed9649539673a274f20d507d67ee963b93dadd" exitCode=0
Jan 21 19:20:03 crc kubenswrapper[5099]: I0121 19:20:03.044114 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483720-zmvtr" event={"ID":"751dc716-b8a6-45b9-9c5e-0382252507ec","Type":"ContainerDied","Data":"35c27a5fe3d17a3b17e4f43b89ed9649539673a274f20d507d67ee963b93dadd"}
Jan 21 19:20:04 crc kubenswrapper[5099]: I0121 19:20:04.379115 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483720-zmvtr"
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483720-zmvtr" Jan 21 19:20:04 crc kubenswrapper[5099]: I0121 19:20:04.538323 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxpgn\" (UniqueName: \"kubernetes.io/projected/751dc716-b8a6-45b9-9c5e-0382252507ec-kube-api-access-sxpgn\") pod \"751dc716-b8a6-45b9-9c5e-0382252507ec\" (UID: \"751dc716-b8a6-45b9-9c5e-0382252507ec\") " Jan 21 19:20:04 crc kubenswrapper[5099]: I0121 19:20:04.548002 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/751dc716-b8a6-45b9-9c5e-0382252507ec-kube-api-access-sxpgn" (OuterVolumeSpecName: "kube-api-access-sxpgn") pod "751dc716-b8a6-45b9-9c5e-0382252507ec" (UID: "751dc716-b8a6-45b9-9c5e-0382252507ec"). InnerVolumeSpecName "kube-api-access-sxpgn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 19:20:04 crc kubenswrapper[5099]: I0121 19:20:04.642011 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sxpgn\" (UniqueName: \"kubernetes.io/projected/751dc716-b8a6-45b9-9c5e-0382252507ec-kube-api-access-sxpgn\") on node \"crc\" DevicePath \"\"" Jan 21 19:20:05 crc kubenswrapper[5099]: I0121 19:20:05.072438 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483720-zmvtr" Jan 21 19:20:05 crc kubenswrapper[5099]: I0121 19:20:05.072485 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483720-zmvtr" event={"ID":"751dc716-b8a6-45b9-9c5e-0382252507ec","Type":"ContainerDied","Data":"251ce657a602e7b8e8fdfd18e3577bab2e6716606f77f571283ec1f3db6cb5d9"} Jan 21 19:20:05 crc kubenswrapper[5099]: I0121 19:20:05.072571 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="251ce657a602e7b8e8fdfd18e3577bab2e6716606f77f571283ec1f3db6cb5d9" Jan 21 19:20:05 crc kubenswrapper[5099]: I0121 19:20:05.475679 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483714-xmmxq"] Jan 21 19:20:05 crc kubenswrapper[5099]: I0121 19:20:05.487359 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483714-xmmxq"] Jan 21 19:20:05 crc kubenswrapper[5099]: I0121 19:20:05.914470 5099 scope.go:117] "RemoveContainer" containerID="902eafa9de1bc3af59cd4e82be49a3162545844e170420f7a5568150cb46f6ab" Jan 21 19:20:05 crc kubenswrapper[5099]: E0121 19:20:05.914974 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 19:20:05 crc kubenswrapper[5099]: I0121 19:20:05.930384 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d262170d-5cf9-4e9a-b5b9-86a9573a63c6" path="/var/lib/kubelet/pods/d262170d-5cf9-4e9a-b5b9-86a9573a63c6/volumes" Jan 21 19:20:16 crc kubenswrapper[5099]: I0121 19:20:16.915677 5099 scope.go:117] "RemoveContainer" containerID="902eafa9de1bc3af59cd4e82be49a3162545844e170420f7a5568150cb46f6ab" Jan 21 19:20:16 crc kubenswrapper[5099]: E0121 19:20:16.916710 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 19:20:22 crc kubenswrapper[5099]: I0121 19:20:22.860430 5099 scope.go:117] "RemoveContainer" containerID="3d67d5f60761bf8b2624edc3340a23e40ce6dc28a9d54a9a141a8019a0ecd900" Jan 21 19:20:30 crc kubenswrapper[5099]: I0121 19:20:30.914022 5099 scope.go:117] "RemoveContainer" containerID="902eafa9de1bc3af59cd4e82be49a3162545844e170420f7a5568150cb46f6ab" Jan 21 19:20:30 crc kubenswrapper[5099]: E0121 19:20:30.914976 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 19:20:41 crc kubenswrapper[5099]: I0121 19:20:41.914446 5099 scope.go:117] "RemoveContainer" containerID="902eafa9de1bc3af59cd4e82be49a3162545844e170420f7a5568150cb46f6ab" Jan 21 19:20:41 crc kubenswrapper[5099]: E0121 19:20:41.915697 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 19:20:54 crc kubenswrapper[5099]: I0121 19:20:54.916100 5099 scope.go:117] "RemoveContainer" containerID="902eafa9de1bc3af59cd4e82be49a3162545844e170420f7a5568150cb46f6ab" Jan 21 19:20:55 crc kubenswrapper[5099]: I0121 19:20:55.732222 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" event={"ID":"b19b831f-eaf0-4c77-859b-84eb9a5f233c","Type":"ContainerStarted","Data":"6d43ae40ee991ec5c4d2cb0a1afff3482a33aac8de2eb88f759bec4fad78d4e6"} Jan 21 19:22:00 crc kubenswrapper[5099]: I0121 19:22:00.172951 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483722-qgqh8"] Jan 21 19:22:00 crc kubenswrapper[5099]: I0121 19:22:00.174815 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="751dc716-b8a6-45b9-9c5e-0382252507ec" containerName="oc" Jan 21 19:22:00 crc kubenswrapper[5099]: I0121 19:22:00.174838 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="751dc716-b8a6-45b9-9c5e-0382252507ec" containerName="oc" Jan 21 19:22:00 crc kubenswrapper[5099]: I0121 19:22:00.175140 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="751dc716-b8a6-45b9-9c5e-0382252507ec" containerName="oc" Jan 21 19:22:00 crc kubenswrapper[5099]: I0121 19:22:00.185539 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483722-qgqh8" Jan 21 19:22:00 crc kubenswrapper[5099]: I0121 19:22:00.185529 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483722-qgqh8"] Jan 21 19:22:00 crc kubenswrapper[5099]: I0121 19:22:00.218590 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 19:22:00 crc kubenswrapper[5099]: I0121 19:22:00.219053 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 19:22:00 crc kubenswrapper[5099]: I0121 19:22:00.218978 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-z79sf\"" Jan 21 19:22:00 crc kubenswrapper[5099]: I0121 19:22:00.279452 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7spcx\" (UniqueName: \"kubernetes.io/projected/ffb1c4b7-6b12-4c13-b954-a6fec8e8a2e1-kube-api-access-7spcx\") pod \"auto-csr-approver-29483722-qgqh8\" (UID: \"ffb1c4b7-6b12-4c13-b954-a6fec8e8a2e1\") " pod="openshift-infra/auto-csr-approver-29483722-qgqh8" Jan 21 19:22:00 crc kubenswrapper[5099]: I0121 19:22:00.382159 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7spcx\" (UniqueName: \"kubernetes.io/projected/ffb1c4b7-6b12-4c13-b954-a6fec8e8a2e1-kube-api-access-7spcx\") pod \"auto-csr-approver-29483722-qgqh8\" (UID: \"ffb1c4b7-6b12-4c13-b954-a6fec8e8a2e1\") " pod="openshift-infra/auto-csr-approver-29483722-qgqh8" Jan 21 19:22:00 crc kubenswrapper[5099]: I0121 19:22:00.434654 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7spcx\" (UniqueName: \"kubernetes.io/projected/ffb1c4b7-6b12-4c13-b954-a6fec8e8a2e1-kube-api-access-7spcx\") pod \"auto-csr-approver-29483722-qgqh8\" (UID: \"ffb1c4b7-6b12-4c13-b954-a6fec8e8a2e1\") " pod="openshift-infra/auto-csr-approver-29483722-qgqh8" Jan 21 19:22:00 crc kubenswrapper[5099]: I0121 19:22:00.530963 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483722-qgqh8" Jan 21 19:22:01 crc kubenswrapper[5099]: I0121 19:22:01.004392 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483722-qgqh8"] Jan 21 19:22:01 crc kubenswrapper[5099]: I0121 19:22:01.430775 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483722-qgqh8" event={"ID":"ffb1c4b7-6b12-4c13-b954-a6fec8e8a2e1","Type":"ContainerStarted","Data":"7fc1526a7cdce26e875f172c8074589c861c0beb74044e56c14b36ac823014b2"} Jan 21 19:22:02 crc kubenswrapper[5099]: I0121 19:22:02.441252 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483722-qgqh8" event={"ID":"ffb1c4b7-6b12-4c13-b954-a6fec8e8a2e1","Type":"ContainerStarted","Data":"5bc3d862316f0b929f48a3ca34b555c728a97a2e93c32999576899da5c67c36d"} Jan 21 19:22:02 crc kubenswrapper[5099]: I0121 19:22:02.462824 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29483722-qgqh8" podStartSLOduration=1.482415287 podStartE2EDuration="2.462797675s" podCreationTimestamp="2026-01-21 19:22:00 +0000 UTC" firstStartedPulling="2026-01-21 19:22:01.023035098 +0000 UTC m=+4078.436997589" lastFinishedPulling="2026-01-21 19:22:02.003417516 +0000 UTC m=+4079.417379977" observedRunningTime="2026-01-21 19:22:02.454462526 +0000 UTC m=+4079.868424987" watchObservedRunningTime="2026-01-21 19:22:02.462797675 +0000 UTC m=+4079.876760126" Jan 21 19:22:03 crc kubenswrapper[5099]: I0121 19:22:03.453907 5099 generic.go:358] "Generic (PLEG): container finished" podID="ffb1c4b7-6b12-4c13-b954-a6fec8e8a2e1" containerID="5bc3d862316f0b929f48a3ca34b555c728a97a2e93c32999576899da5c67c36d" exitCode=0 Jan 21 19:22:03 crc kubenswrapper[5099]: I0121 19:22:03.454046 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483722-qgqh8" event={"ID":"ffb1c4b7-6b12-4c13-b954-a6fec8e8a2e1","Type":"ContainerDied","Data":"5bc3d862316f0b929f48a3ca34b555c728a97a2e93c32999576899da5c67c36d"} Jan 21 19:22:04 crc kubenswrapper[5099]: I0121 19:22:04.885165 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483722-qgqh8" Jan 21 19:22:04 crc kubenswrapper[5099]: I0121 19:22:04.985239 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7spcx\" (UniqueName: \"kubernetes.io/projected/ffb1c4b7-6b12-4c13-b954-a6fec8e8a2e1-kube-api-access-7spcx\") pod \"ffb1c4b7-6b12-4c13-b954-a6fec8e8a2e1\" (UID: \"ffb1c4b7-6b12-4c13-b954-a6fec8e8a2e1\") " Jan 21 19:22:04 crc kubenswrapper[5099]: I0121 19:22:04.995150 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffb1c4b7-6b12-4c13-b954-a6fec8e8a2e1-kube-api-access-7spcx" (OuterVolumeSpecName: "kube-api-access-7spcx") pod "ffb1c4b7-6b12-4c13-b954-a6fec8e8a2e1" (UID: "ffb1c4b7-6b12-4c13-b954-a6fec8e8a2e1"). InnerVolumeSpecName "kube-api-access-7spcx". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 19:22:05 crc kubenswrapper[5099]: I0121 19:22:05.087632 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7spcx\" (UniqueName: \"kubernetes.io/projected/ffb1c4b7-6b12-4c13-b954-a6fec8e8a2e1-kube-api-access-7spcx\") on node \"crc\" DevicePath \"\"" Jan 21 19:22:05 crc kubenswrapper[5099]: I0121 19:22:05.503785 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483722-qgqh8" event={"ID":"ffb1c4b7-6b12-4c13-b954-a6fec8e8a2e1","Type":"ContainerDied","Data":"7fc1526a7cdce26e875f172c8074589c861c0beb74044e56c14b36ac823014b2"} Jan 21 19:22:05 crc kubenswrapper[5099]: I0121 19:22:05.503858 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fc1526a7cdce26e875f172c8074589c861c0beb74044e56c14b36ac823014b2" Jan 21 19:22:05 crc kubenswrapper[5099]: I0121 19:22:05.503990 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483722-qgqh8" Jan 21 19:22:05 crc kubenswrapper[5099]: I0121 19:22:05.561684 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483716-7x57q"] Jan 21 19:22:05 crc kubenswrapper[5099]: I0121 19:22:05.574122 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483716-7x57q"] Jan 21 19:22:05 crc kubenswrapper[5099]: I0121 19:22:05.932055 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40e8a714-529b-46a2-b839-df3fd3ac1bc6" path="/var/lib/kubelet/pods/40e8a714-529b-46a2-b839-df3fd3ac1bc6/volumes" Jan 21 19:22:23 crc kubenswrapper[5099]: I0121 19:22:23.051838 5099 scope.go:117] "RemoveContainer" containerID="a5aff4d4eebf98ecafd7c73470ce4a5b88c7545f984d20fb074f458425b2aba2" Jan 21 19:23:22 crc kubenswrapper[5099]: I0121 19:23:22.065308 5099 patch_prober.go:28] interesting pod/machine-config-daemon-hsl47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 19:23:22 crc kubenswrapper[5099]: I0121 19:23:22.066044 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 19:23:52 crc kubenswrapper[5099]: I0121 19:23:52.064817 5099 patch_prober.go:28] interesting pod/machine-config-daemon-hsl47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 19:23:52 crc kubenswrapper[5099]: I0121 19:23:52.065599 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 19:24:00 crc kubenswrapper[5099]: I0121 19:24:00.164102 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483724-txczr"] Jan 21 19:24:00 crc 
kubenswrapper[5099]: I0121 19:24:00.166606 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ffb1c4b7-6b12-4c13-b954-a6fec8e8a2e1" containerName="oc" Jan 21 19:24:00 crc kubenswrapper[5099]: I0121 19:24:00.166704 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffb1c4b7-6b12-4c13-b954-a6fec8e8a2e1" containerName="oc" Jan 21 19:24:00 crc kubenswrapper[5099]: I0121 19:24:00.167199 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="ffb1c4b7-6b12-4c13-b954-a6fec8e8a2e1" containerName="oc" Jan 21 19:24:00 crc kubenswrapper[5099]: I0121 19:24:00.181097 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483724-txczr"] Jan 21 19:24:00 crc kubenswrapper[5099]: I0121 19:24:00.181321 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483724-txczr" Jan 21 19:24:00 crc kubenswrapper[5099]: I0121 19:24:00.190037 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 19:24:00 crc kubenswrapper[5099]: I0121 19:24:00.190220 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 19:24:00 crc kubenswrapper[5099]: I0121 19:24:00.190526 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-z79sf\"" Jan 21 19:24:00 crc kubenswrapper[5099]: I0121 19:24:00.256067 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8tmm\" (UniqueName: \"kubernetes.io/projected/47104d4a-0d52-4671-a37b-92d039c4b9c9-kube-api-access-l8tmm\") pod \"auto-csr-approver-29483724-txczr\" (UID: \"47104d4a-0d52-4671-a37b-92d039c4b9c9\") " pod="openshift-infra/auto-csr-approver-29483724-txczr" Jan 21 19:24:00 crc kubenswrapper[5099]: I0121 19:24:00.357387 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l8tmm\" (UniqueName: \"kubernetes.io/projected/47104d4a-0d52-4671-a37b-92d039c4b9c9-kube-api-access-l8tmm\") pod \"auto-csr-approver-29483724-txczr\" (UID: \"47104d4a-0d52-4671-a37b-92d039c4b9c9\") " pod="openshift-infra/auto-csr-approver-29483724-txczr" Jan 21 19:24:00 crc kubenswrapper[5099]: I0121 19:24:00.390032 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8tmm\" (UniqueName: \"kubernetes.io/projected/47104d4a-0d52-4671-a37b-92d039c4b9c9-kube-api-access-l8tmm\") pod \"auto-csr-approver-29483724-txczr\" (UID: \"47104d4a-0d52-4671-a37b-92d039c4b9c9\") " pod="openshift-infra/auto-csr-approver-29483724-txczr" Jan 21 19:24:00 crc kubenswrapper[5099]: I0121 19:24:00.520315 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483724-txczr" Jan 21 19:24:00 crc kubenswrapper[5099]: I0121 19:24:00.814942 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483724-txczr"] Jan 21 19:24:00 crc kubenswrapper[5099]: I0121 19:24:00.843327 5099 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 19:24:01 crc kubenswrapper[5099]: I0121 19:24:01.769816 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483724-txczr" event={"ID":"47104d4a-0d52-4671-a37b-92d039c4b9c9","Type":"ContainerStarted","Data":"73c6087d97402a4767dbf36a0843b411ae20a748b5e3423c40aae1f16b566bef"} Jan 21 19:24:02 crc kubenswrapper[5099]: I0121 19:24:02.781971 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483724-txczr" event={"ID":"47104d4a-0d52-4671-a37b-92d039c4b9c9","Type":"ContainerDied","Data":"dfe0b03c6e25fae6d866ef7e22a882af9b138ca86d4b23d6b55675318ff1fc0e"} Jan 21 19:24:02 crc kubenswrapper[5099]: I0121 19:24:02.781816 5099 generic.go:358] "Generic (PLEG): container finished" podID="47104d4a-0d52-4671-a37b-92d039c4b9c9" containerID="dfe0b03c6e25fae6d866ef7e22a882af9b138ca86d4b23d6b55675318ff1fc0e" exitCode=0 Jan 21 19:24:04 crc kubenswrapper[5099]: I0121 19:24:04.044259 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483724-txczr" Jan 21 19:24:04 crc kubenswrapper[5099]: I0121 19:24:04.145435 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8tmm\" (UniqueName: \"kubernetes.io/projected/47104d4a-0d52-4671-a37b-92d039c4b9c9-kube-api-access-l8tmm\") pod \"47104d4a-0d52-4671-a37b-92d039c4b9c9\" (UID: \"47104d4a-0d52-4671-a37b-92d039c4b9c9\") " Jan 21 19:24:04 crc kubenswrapper[5099]: I0121 19:24:04.157512 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47104d4a-0d52-4671-a37b-92d039c4b9c9-kube-api-access-l8tmm" (OuterVolumeSpecName: "kube-api-access-l8tmm") pod "47104d4a-0d52-4671-a37b-92d039c4b9c9" (UID: "47104d4a-0d52-4671-a37b-92d039c4b9c9"). InnerVolumeSpecName "kube-api-access-l8tmm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 19:24:04 crc kubenswrapper[5099]: I0121 19:24:04.247457 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l8tmm\" (UniqueName: \"kubernetes.io/projected/47104d4a-0d52-4671-a37b-92d039c4b9c9-kube-api-access-l8tmm\") on node \"crc\" DevicePath \"\"" Jan 21 19:24:04 crc kubenswrapper[5099]: I0121 19:24:04.806390 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483724-txczr" Jan 21 19:24:04 crc kubenswrapper[5099]: I0121 19:24:04.806470 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483724-txczr" event={"ID":"47104d4a-0d52-4671-a37b-92d039c4b9c9","Type":"ContainerDied","Data":"73c6087d97402a4767dbf36a0843b411ae20a748b5e3423c40aae1f16b566bef"} Jan 21 19:24:04 crc kubenswrapper[5099]: I0121 19:24:04.806559 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73c6087d97402a4767dbf36a0843b411ae20a748b5e3423c40aae1f16b566bef" Jan 21 19:24:05 crc kubenswrapper[5099]: I0121 19:24:05.146351 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483718-5kl5b"] Jan 21 19:24:05 crc kubenswrapper[5099]: I0121 19:24:05.157226 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483718-5kl5b"] Jan 21 19:24:05 crc kubenswrapper[5099]: I0121 19:24:05.862847 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6pvpm_d9b34413-4767-4d59-b13b-8f882453977a/kube-multus/0.log" Jan 21 19:24:05 crc kubenswrapper[5099]: I0121 19:24:05.863809 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6pvpm_d9b34413-4767-4d59-b13b-8f882453977a/kube-multus/0.log" Jan 21 19:24:05 crc kubenswrapper[5099]: I0121 19:24:05.872971 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 19:24:05 crc kubenswrapper[5099]: I0121 19:24:05.873433 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 19:24:05 crc kubenswrapper[5099]: I0121 19:24:05.923947 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b61c0f2-5afb-43ac-8d2d-ba2d8ba68459" path="/var/lib/kubelet/pods/3b61c0f2-5afb-43ac-8d2d-ba2d8ba68459/volumes" Jan 21 19:24:22 crc kubenswrapper[5099]: I0121 19:24:22.064577 5099 patch_prober.go:28] interesting pod/machine-config-daemon-hsl47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 19:24:22 crc kubenswrapper[5099]: I0121 19:24:22.065310 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 19:24:22 crc kubenswrapper[5099]: I0121 19:24:22.065468 5099 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" Jan 21 19:24:22 crc kubenswrapper[5099]: I0121 19:24:22.066273 5099 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6d43ae40ee991ec5c4d2cb0a1afff3482a33aac8de2eb88f759bec4fad78d4e6"} pod="openshift-machine-config-operator/machine-config-daemon-hsl47" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 19:24:22 
Jan 21 19:24:22 crc kubenswrapper[5099]: I0121 19:24:22.066339 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" containerID="cri-o://6d43ae40ee991ec5c4d2cb0a1afff3482a33aac8de2eb88f759bec4fad78d4e6" gracePeriod=600
Jan 21 19:24:23 crc kubenswrapper[5099]: I0121 19:24:23.063374 5099 generic.go:358] "Generic (PLEG): container finished" podID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerID="6d43ae40ee991ec5c4d2cb0a1afff3482a33aac8de2eb88f759bec4fad78d4e6" exitCode=0
Jan 21 19:24:23 crc kubenswrapper[5099]: I0121 19:24:23.063506 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" event={"ID":"b19b831f-eaf0-4c77-859b-84eb9a5f233c","Type":"ContainerDied","Data":"6d43ae40ee991ec5c4d2cb0a1afff3482a33aac8de2eb88f759bec4fad78d4e6"}
Jan 21 19:24:23 crc kubenswrapper[5099]: I0121 19:24:23.064086 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" event={"ID":"b19b831f-eaf0-4c77-859b-84eb9a5f233c","Type":"ContainerStarted","Data":"dc0037083c7637e021a0e6669b28ab3b5890ed10b5268a8bde9f886e7d3b8304"}
Jan 21 19:24:23 crc kubenswrapper[5099]: I0121 19:24:23.064122 5099 scope.go:117] "RemoveContainer" containerID="902eafa9de1bc3af59cd4e82be49a3162545844e170420f7a5568150cb46f6ab"
Jan 21 19:24:23 crc kubenswrapper[5099]: I0121 19:24:23.253906 5099 scope.go:117] "RemoveContainer" containerID="b1627d467fc8df3227c8fb0a4acd8fec30e06b5add0b2187911ca09a31a4e365"
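Note: the "RemoveContainer" for 902eafa9... right after dc0037... starts is container garbage collection: with a replacement running, the kubelet keeps only the most recent dead instance of machine-config-daemon (what `oc logs --previous` reads) and deletes anything older. Keep-newest-N pruning, with N=1 assumed as the per-container default:

    package main

    import (
    	"fmt"
    	"sort"
    )

    type dead struct {
    	id       string
    	finished int64 // unix seconds
    }

    // pruneDead keeps the newest `keep` dead instances of one container and
    // returns the rest for removal.
    func pruneDead(instances []dead, keep int) []string {
    	sort.Slice(instances, func(i, j int) bool {
    		return instances[i].finished > instances[j].finished
    	})
    	if keep > len(instances) {
    		keep = len(instances)
    	}
    	var remove []string
    	for _, d := range instances[keep:] {
    		remove = append(remove, d.id)
    	}
    	return remove
    }

    func main() {
    	history := []dead{
    		{"902eafa9de1b", 1769022263}, // died during the CrashLoopBackOff spell
    		{"6d43ae40ee99", 1769023463}, // just killed by the liveness probe
    	}
    	fmt.Println(pruneDead(history, 1)) // [902eafa9de1b]
    }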
Need to start a new one" pod="openshift-marketplace/community-operators-vmmmg" Jan 21 19:25:17 crc kubenswrapper[5099]: I0121 19:25:17.543170 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vmmmg"] Jan 21 19:25:17 crc kubenswrapper[5099]: I0121 19:25:17.687492 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9150eef-4dd6-427e-8751-9f54821157e0-utilities\") pod \"community-operators-vmmmg\" (UID: \"d9150eef-4dd6-427e-8751-9f54821157e0\") " pod="openshift-marketplace/community-operators-vmmmg" Jan 21 19:25:17 crc kubenswrapper[5099]: I0121 19:25:17.687612 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcsz6\" (UniqueName: \"kubernetes.io/projected/d9150eef-4dd6-427e-8751-9f54821157e0-kube-api-access-fcsz6\") pod \"community-operators-vmmmg\" (UID: \"d9150eef-4dd6-427e-8751-9f54821157e0\") " pod="openshift-marketplace/community-operators-vmmmg" Jan 21 19:25:17 crc kubenswrapper[5099]: I0121 19:25:17.687898 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9150eef-4dd6-427e-8751-9f54821157e0-catalog-content\") pod \"community-operators-vmmmg\" (UID: \"d9150eef-4dd6-427e-8751-9f54821157e0\") " pod="openshift-marketplace/community-operators-vmmmg" Jan 21 19:25:17 crc kubenswrapper[5099]: I0121 19:25:17.789770 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9150eef-4dd6-427e-8751-9f54821157e0-catalog-content\") pod \"community-operators-vmmmg\" (UID: \"d9150eef-4dd6-427e-8751-9f54821157e0\") " pod="openshift-marketplace/community-operators-vmmmg" Jan 21 19:25:17 crc kubenswrapper[5099]: I0121 19:25:17.789904 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9150eef-4dd6-427e-8751-9f54821157e0-utilities\") pod \"community-operators-vmmmg\" (UID: \"d9150eef-4dd6-427e-8751-9f54821157e0\") " pod="openshift-marketplace/community-operators-vmmmg" Jan 21 19:25:17 crc kubenswrapper[5099]: I0121 19:25:17.789999 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fcsz6\" (UniqueName: \"kubernetes.io/projected/d9150eef-4dd6-427e-8751-9f54821157e0-kube-api-access-fcsz6\") pod \"community-operators-vmmmg\" (UID: \"d9150eef-4dd6-427e-8751-9f54821157e0\") " pod="openshift-marketplace/community-operators-vmmmg" Jan 21 19:25:17 crc kubenswrapper[5099]: I0121 19:25:17.790529 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9150eef-4dd6-427e-8751-9f54821157e0-catalog-content\") pod \"community-operators-vmmmg\" (UID: \"d9150eef-4dd6-427e-8751-9f54821157e0\") " pod="openshift-marketplace/community-operators-vmmmg" Jan 21 19:25:17 crc kubenswrapper[5099]: I0121 19:25:17.790886 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9150eef-4dd6-427e-8751-9f54821157e0-utilities\") pod \"community-operators-vmmmg\" (UID: \"d9150eef-4dd6-427e-8751-9f54821157e0\") " pod="openshift-marketplace/community-operators-vmmmg" Jan 21 19:25:17 crc kubenswrapper[5099]: I0121 19:25:17.827479 5099 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-fcsz6\" (UniqueName: \"kubernetes.io/projected/d9150eef-4dd6-427e-8751-9f54821157e0-kube-api-access-fcsz6\") pod \"community-operators-vmmmg\" (UID: \"d9150eef-4dd6-427e-8751-9f54821157e0\") " pod="openshift-marketplace/community-operators-vmmmg" Jan 21 19:25:17 crc kubenswrapper[5099]: I0121 19:25:17.875145 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vmmmg" Jan 21 19:25:18 crc kubenswrapper[5099]: I0121 19:25:18.160655 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vmmmg"] Jan 21 19:25:18 crc kubenswrapper[5099]: I0121 19:25:18.655382 5099 generic.go:358] "Generic (PLEG): container finished" podID="d9150eef-4dd6-427e-8751-9f54821157e0" containerID="a6624d2f3a7637066b9e8ced230708d6a0a1cb96b23b887d5bb6a8dd86747e9d" exitCode=0 Jan 21 19:25:18 crc kubenswrapper[5099]: I0121 19:25:18.655545 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmmmg" event={"ID":"d9150eef-4dd6-427e-8751-9f54821157e0","Type":"ContainerDied","Data":"a6624d2f3a7637066b9e8ced230708d6a0a1cb96b23b887d5bb6a8dd86747e9d"} Jan 21 19:25:18 crc kubenswrapper[5099]: I0121 19:25:18.655589 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmmmg" event={"ID":"d9150eef-4dd6-427e-8751-9f54821157e0","Type":"ContainerStarted","Data":"279560e2baa5ce20efa6a9d5c4eed4f6359c3a0f3aec083bc143521c496e1d4c"} Jan 21 19:25:19 crc kubenswrapper[5099]: I0121 19:25:19.667461 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmmmg" event={"ID":"d9150eef-4dd6-427e-8751-9f54821157e0","Type":"ContainerStarted","Data":"3e729b6ff6b11691f24197d2326ee84b4311e599f5939aee9a0f79f42cdc75f1"} Jan 21 19:25:20 crc kubenswrapper[5099]: I0121 19:25:20.682625 5099 generic.go:358] "Generic (PLEG): container finished" podID="d9150eef-4dd6-427e-8751-9f54821157e0" containerID="3e729b6ff6b11691f24197d2326ee84b4311e599f5939aee9a0f79f42cdc75f1" exitCode=0 Jan 21 19:25:20 crc kubenswrapper[5099]: I0121 19:25:20.682787 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmmmg" event={"ID":"d9150eef-4dd6-427e-8751-9f54821157e0","Type":"ContainerDied","Data":"3e729b6ff6b11691f24197d2326ee84b4311e599f5939aee9a0f79f42cdc75f1"} Jan 21 19:25:21 crc kubenswrapper[5099]: I0121 19:25:21.694027 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmmmg" event={"ID":"d9150eef-4dd6-427e-8751-9f54821157e0","Type":"ContainerStarted","Data":"f112e875adc5de131bb4255e231f876d2cefee3d2645f60c91ef9af62b492e67"} Jan 21 19:25:21 crc kubenswrapper[5099]: I0121 19:25:21.720439 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vmmmg" podStartSLOduration=3.99088062 podStartE2EDuration="4.720419584s" podCreationTimestamp="2026-01-21 19:25:17 +0000 UTC" firstStartedPulling="2026-01-21 19:25:18.656868749 +0000 UTC m=+4276.070831240" lastFinishedPulling="2026-01-21 19:25:19.386407713 +0000 UTC m=+4276.800370204" observedRunningTime="2026-01-21 19:25:21.714036121 +0000 UTC m=+4279.127998582" watchObservedRunningTime="2026-01-21 19:25:21.720419584 +0000 UTC m=+4279.134382045" Jan 21 19:25:23 crc kubenswrapper[5099]: I0121 19:25:23.397026 5099 scope.go:117] "RemoveContainer" 
containerID="449347fbfbd2eb9765edceed8cf659bd310244c5fd260ece73cd2a1599fe56c0" Jan 21 19:25:23 crc kubenswrapper[5099]: I0121 19:25:23.427318 5099 scope.go:117] "RemoveContainer" containerID="7a9b9b20605e0d83d0ea4158aee0fd4dfbb42d0204b146c16a6292193488d7f0" Jan 21 19:25:23 crc kubenswrapper[5099]: I0121 19:25:23.463404 5099 scope.go:117] "RemoveContainer" containerID="ffb7bd5dfa3757d8fe88c215a771b2f0ecb4add3d1bd85d18266b23860faeb3f" Jan 21 19:25:27 crc kubenswrapper[5099]: I0121 19:25:27.876153 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vmmmg" Jan 21 19:25:27 crc kubenswrapper[5099]: I0121 19:25:27.877569 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-vmmmg" Jan 21 19:25:27 crc kubenswrapper[5099]: I0121 19:25:27.946688 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vmmmg" Jan 21 19:25:28 crc kubenswrapper[5099]: I0121 19:25:28.831413 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vmmmg" Jan 21 19:25:28 crc kubenswrapper[5099]: I0121 19:25:28.891461 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vmmmg"] Jan 21 19:25:30 crc kubenswrapper[5099]: I0121 19:25:30.788894 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vmmmg" podUID="d9150eef-4dd6-427e-8751-9f54821157e0" containerName="registry-server" containerID="cri-o://f112e875adc5de131bb4255e231f876d2cefee3d2645f60c91ef9af62b492e67" gracePeriod=2 Jan 21 19:25:31 crc kubenswrapper[5099]: I0121 19:25:31.750094 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vmmmg" Jan 21 19:25:31 crc kubenswrapper[5099]: I0121 19:25:31.819020 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcsz6\" (UniqueName: \"kubernetes.io/projected/d9150eef-4dd6-427e-8751-9f54821157e0-kube-api-access-fcsz6\") pod \"d9150eef-4dd6-427e-8751-9f54821157e0\" (UID: \"d9150eef-4dd6-427e-8751-9f54821157e0\") " Jan 21 19:25:31 crc kubenswrapper[5099]: I0121 19:25:31.819208 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9150eef-4dd6-427e-8751-9f54821157e0-catalog-content\") pod \"d9150eef-4dd6-427e-8751-9f54821157e0\" (UID: \"d9150eef-4dd6-427e-8751-9f54821157e0\") " Jan 21 19:25:31 crc kubenswrapper[5099]: I0121 19:25:31.819296 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9150eef-4dd6-427e-8751-9f54821157e0-utilities\") pod \"d9150eef-4dd6-427e-8751-9f54821157e0\" (UID: \"d9150eef-4dd6-427e-8751-9f54821157e0\") " Jan 21 19:25:31 crc kubenswrapper[5099]: I0121 19:25:31.821980 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9150eef-4dd6-427e-8751-9f54821157e0-utilities" (OuterVolumeSpecName: "utilities") pod "d9150eef-4dd6-427e-8751-9f54821157e0" (UID: "d9150eef-4dd6-427e-8751-9f54821157e0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 19:25:31 crc kubenswrapper[5099]: I0121 19:25:31.823040 5099 generic.go:358] "Generic (PLEG): container finished" podID="d9150eef-4dd6-427e-8751-9f54821157e0" containerID="f112e875adc5de131bb4255e231f876d2cefee3d2645f60c91ef9af62b492e67" exitCode=0 Jan 21 19:25:31 crc kubenswrapper[5099]: I0121 19:25:31.823052 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmmmg" event={"ID":"d9150eef-4dd6-427e-8751-9f54821157e0","Type":"ContainerDied","Data":"f112e875adc5de131bb4255e231f876d2cefee3d2645f60c91ef9af62b492e67"} Jan 21 19:25:31 crc kubenswrapper[5099]: I0121 19:25:31.823195 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmmmg" event={"ID":"d9150eef-4dd6-427e-8751-9f54821157e0","Type":"ContainerDied","Data":"279560e2baa5ce20efa6a9d5c4eed4f6359c3a0f3aec083bc143521c496e1d4c"} Jan 21 19:25:31 crc kubenswrapper[5099]: I0121 19:25:31.823234 5099 scope.go:117] "RemoveContainer" containerID="f112e875adc5de131bb4255e231f876d2cefee3d2645f60c91ef9af62b492e67" Jan 21 19:25:31 crc kubenswrapper[5099]: I0121 19:25:31.823659 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vmmmg" Jan 21 19:25:31 crc kubenswrapper[5099]: I0121 19:25:31.836904 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9150eef-4dd6-427e-8751-9f54821157e0-kube-api-access-fcsz6" (OuterVolumeSpecName: "kube-api-access-fcsz6") pod "d9150eef-4dd6-427e-8751-9f54821157e0" (UID: "d9150eef-4dd6-427e-8751-9f54821157e0"). InnerVolumeSpecName "kube-api-access-fcsz6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 19:25:31 crc kubenswrapper[5099]: I0121 19:25:31.867802 5099 scope.go:117] "RemoveContainer" containerID="3e729b6ff6b11691f24197d2326ee84b4311e599f5939aee9a0f79f42cdc75f1" Jan 21 19:25:31 crc kubenswrapper[5099]: I0121 19:25:31.888696 5099 scope.go:117] "RemoveContainer" containerID="a6624d2f3a7637066b9e8ced230708d6a0a1cb96b23b887d5bb6a8dd86747e9d" Jan 21 19:25:31 crc kubenswrapper[5099]: I0121 19:25:31.891378 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9150eef-4dd6-427e-8751-9f54821157e0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d9150eef-4dd6-427e-8751-9f54821157e0" (UID: "d9150eef-4dd6-427e-8751-9f54821157e0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 19:25:31 crc kubenswrapper[5099]: I0121 19:25:31.918704 5099 scope.go:117] "RemoveContainer" containerID="f112e875adc5de131bb4255e231f876d2cefee3d2645f60c91ef9af62b492e67" Jan 21 19:25:31 crc kubenswrapper[5099]: E0121 19:25:31.919274 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f112e875adc5de131bb4255e231f876d2cefee3d2645f60c91ef9af62b492e67\": container with ID starting with f112e875adc5de131bb4255e231f876d2cefee3d2645f60c91ef9af62b492e67 not found: ID does not exist" containerID="f112e875adc5de131bb4255e231f876d2cefee3d2645f60c91ef9af62b492e67" Jan 21 19:25:31 crc kubenswrapper[5099]: I0121 19:25:31.919319 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f112e875adc5de131bb4255e231f876d2cefee3d2645f60c91ef9af62b492e67"} err="failed to get container status \"f112e875adc5de131bb4255e231f876d2cefee3d2645f60c91ef9af62b492e67\": rpc error: code = NotFound desc = could not find container \"f112e875adc5de131bb4255e231f876d2cefee3d2645f60c91ef9af62b492e67\": container with ID starting with f112e875adc5de131bb4255e231f876d2cefee3d2645f60c91ef9af62b492e67 not found: ID does not exist" Jan 21 19:25:31 crc kubenswrapper[5099]: I0121 19:25:31.919342 5099 scope.go:117] "RemoveContainer" containerID="3e729b6ff6b11691f24197d2326ee84b4311e599f5939aee9a0f79f42cdc75f1" Jan 21 19:25:31 crc kubenswrapper[5099]: E0121 19:25:31.919888 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e729b6ff6b11691f24197d2326ee84b4311e599f5939aee9a0f79f42cdc75f1\": container with ID starting with 3e729b6ff6b11691f24197d2326ee84b4311e599f5939aee9a0f79f42cdc75f1 not found: ID does not exist" containerID="3e729b6ff6b11691f24197d2326ee84b4311e599f5939aee9a0f79f42cdc75f1" Jan 21 19:25:31 crc kubenswrapper[5099]: I0121 19:25:31.919907 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e729b6ff6b11691f24197d2326ee84b4311e599f5939aee9a0f79f42cdc75f1"} err="failed to get container status \"3e729b6ff6b11691f24197d2326ee84b4311e599f5939aee9a0f79f42cdc75f1\": rpc error: code = NotFound desc = could not find container \"3e729b6ff6b11691f24197d2326ee84b4311e599f5939aee9a0f79f42cdc75f1\": container with ID starting with 3e729b6ff6b11691f24197d2326ee84b4311e599f5939aee9a0f79f42cdc75f1 not found: ID does not exist" Jan 21 19:25:31 crc kubenswrapper[5099]: I0121 19:25:31.919918 5099 scope.go:117] "RemoveContainer" containerID="a6624d2f3a7637066b9e8ced230708d6a0a1cb96b23b887d5bb6a8dd86747e9d" Jan 21 19:25:31 crc kubenswrapper[5099]: E0121 19:25:31.920336 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6624d2f3a7637066b9e8ced230708d6a0a1cb96b23b887d5bb6a8dd86747e9d\": container with ID starting with a6624d2f3a7637066b9e8ced230708d6a0a1cb96b23b887d5bb6a8dd86747e9d not found: ID does not exist" containerID="a6624d2f3a7637066b9e8ced230708d6a0a1cb96b23b887d5bb6a8dd86747e9d" Jan 21 19:25:31 crc kubenswrapper[5099]: I0121 19:25:31.920403 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6624d2f3a7637066b9e8ced230708d6a0a1cb96b23b887d5bb6a8dd86747e9d"} err="failed to get container status \"a6624d2f3a7637066b9e8ced230708d6a0a1cb96b23b887d5bb6a8dd86747e9d\": rpc error: code = NotFound desc = could not 
find container \"a6624d2f3a7637066b9e8ced230708d6a0a1cb96b23b887d5bb6a8dd86747e9d\": container with ID starting with a6624d2f3a7637066b9e8ced230708d6a0a1cb96b23b887d5bb6a8dd86747e9d not found: ID does not exist" Jan 21 19:25:31 crc kubenswrapper[5099]: I0121 19:25:31.921301 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fcsz6\" (UniqueName: \"kubernetes.io/projected/d9150eef-4dd6-427e-8751-9f54821157e0-kube-api-access-fcsz6\") on node \"crc\" DevicePath \"\"" Jan 21 19:25:31 crc kubenswrapper[5099]: I0121 19:25:31.921328 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9150eef-4dd6-427e-8751-9f54821157e0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 19:25:31 crc kubenswrapper[5099]: I0121 19:25:31.921338 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9150eef-4dd6-427e-8751-9f54821157e0-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 19:25:32 crc kubenswrapper[5099]: I0121 19:25:32.163660 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vmmmg"] Jan 21 19:25:32 crc kubenswrapper[5099]: I0121 19:25:32.171924 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vmmmg"] Jan 21 19:25:33 crc kubenswrapper[5099]: I0121 19:25:33.932901 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9150eef-4dd6-427e-8751-9f54821157e0" path="/var/lib/kubelet/pods/d9150eef-4dd6-427e-8751-9f54821157e0/volumes" Jan 21 19:26:00 crc kubenswrapper[5099]: I0121 19:26:00.172890 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483726-cnvwz"] Jan 21 19:26:00 crc kubenswrapper[5099]: I0121 19:26:00.174677 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d9150eef-4dd6-427e-8751-9f54821157e0" containerName="registry-server" Jan 21 19:26:00 crc kubenswrapper[5099]: I0121 19:26:00.174705 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9150eef-4dd6-427e-8751-9f54821157e0" containerName="registry-server" Jan 21 19:26:00 crc kubenswrapper[5099]: I0121 19:26:00.174784 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d9150eef-4dd6-427e-8751-9f54821157e0" containerName="extract-content" Jan 21 19:26:00 crc kubenswrapper[5099]: I0121 19:26:00.174797 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9150eef-4dd6-427e-8751-9f54821157e0" containerName="extract-content" Jan 21 19:26:00 crc kubenswrapper[5099]: I0121 19:26:00.174828 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d9150eef-4dd6-427e-8751-9f54821157e0" containerName="extract-utilities" Jan 21 19:26:00 crc kubenswrapper[5099]: I0121 19:26:00.174841 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9150eef-4dd6-427e-8751-9f54821157e0" containerName="extract-utilities" Jan 21 19:26:00 crc kubenswrapper[5099]: I0121 19:26:00.175062 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="d9150eef-4dd6-427e-8751-9f54821157e0" containerName="registry-server" Jan 21 19:26:00 crc kubenswrapper[5099]: I0121 19:26:00.186991 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483726-cnvwz" Jan 21 19:26:00 crc kubenswrapper[5099]: I0121 19:26:00.190593 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 19:26:00 crc kubenswrapper[5099]: I0121 19:26:00.193690 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 19:26:00 crc kubenswrapper[5099]: I0121 19:26:00.194179 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-z79sf\"" Jan 21 19:26:00 crc kubenswrapper[5099]: I0121 19:26:00.200786 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483726-cnvwz"] Jan 21 19:26:00 crc kubenswrapper[5099]: I0121 19:26:00.305865 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbcpv\" (UniqueName: \"kubernetes.io/projected/aae905b3-cb6f-4fc1-ad9e-7b5638630d55-kube-api-access-vbcpv\") pod \"auto-csr-approver-29483726-cnvwz\" (UID: \"aae905b3-cb6f-4fc1-ad9e-7b5638630d55\") " pod="openshift-infra/auto-csr-approver-29483726-cnvwz" Jan 21 19:26:00 crc kubenswrapper[5099]: I0121 19:26:00.408122 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vbcpv\" (UniqueName: \"kubernetes.io/projected/aae905b3-cb6f-4fc1-ad9e-7b5638630d55-kube-api-access-vbcpv\") pod \"auto-csr-approver-29483726-cnvwz\" (UID: \"aae905b3-cb6f-4fc1-ad9e-7b5638630d55\") " pod="openshift-infra/auto-csr-approver-29483726-cnvwz" Jan 21 19:26:00 crc kubenswrapper[5099]: I0121 19:26:00.453157 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbcpv\" (UniqueName: \"kubernetes.io/projected/aae905b3-cb6f-4fc1-ad9e-7b5638630d55-kube-api-access-vbcpv\") pod \"auto-csr-approver-29483726-cnvwz\" (UID: \"aae905b3-cb6f-4fc1-ad9e-7b5638630d55\") " pod="openshift-infra/auto-csr-approver-29483726-cnvwz" Jan 21 19:26:00 crc kubenswrapper[5099]: I0121 19:26:00.521867 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483726-cnvwz" Jan 21 19:26:00 crc kubenswrapper[5099]: I0121 19:26:00.848831 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483726-cnvwz"] Jan 21 19:26:01 crc kubenswrapper[5099]: I0121 19:26:01.212552 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483726-cnvwz" event={"ID":"aae905b3-cb6f-4fc1-ad9e-7b5638630d55","Type":"ContainerStarted","Data":"1de46519703df5280a847c81965e2f967c46459bdc4a23e722e1502f1fe9a7a1"} Jan 21 19:26:02 crc kubenswrapper[5099]: I0121 19:26:02.244908 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483726-cnvwz" event={"ID":"aae905b3-cb6f-4fc1-ad9e-7b5638630d55","Type":"ContainerStarted","Data":"e63abce9676200ca065ccec3e68c8b6d58e703002974174233beec4cdca6ef70"} Jan 21 19:26:02 crc kubenswrapper[5099]: I0121 19:26:02.269320 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29483726-cnvwz" podStartSLOduration=1.383957343 podStartE2EDuration="2.269296215s" podCreationTimestamp="2026-01-21 19:26:00 +0000 UTC" firstStartedPulling="2026-01-21 19:26:00.848174602 +0000 UTC m=+4318.262137063" lastFinishedPulling="2026-01-21 19:26:01.733513434 +0000 UTC m=+4319.147475935" observedRunningTime="2026-01-21 19:26:02.261717904 +0000 UTC m=+4319.675680375" watchObservedRunningTime="2026-01-21 19:26:02.269296215 +0000 UTC m=+4319.683258676" Jan 21 19:26:03 crc kubenswrapper[5099]: I0121 19:26:03.256889 5099 generic.go:358] "Generic (PLEG): container finished" podID="aae905b3-cb6f-4fc1-ad9e-7b5638630d55" containerID="e63abce9676200ca065ccec3e68c8b6d58e703002974174233beec4cdca6ef70" exitCode=0 Jan 21 19:26:03 crc kubenswrapper[5099]: I0121 19:26:03.257052 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483726-cnvwz" event={"ID":"aae905b3-cb6f-4fc1-ad9e-7b5638630d55","Type":"ContainerDied","Data":"e63abce9676200ca065ccec3e68c8b6d58e703002974174233beec4cdca6ef70"} Jan 21 19:26:04 crc kubenswrapper[5099]: I0121 19:26:04.579756 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483726-cnvwz" Jan 21 19:26:04 crc kubenswrapper[5099]: I0121 19:26:04.711959 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbcpv\" (UniqueName: \"kubernetes.io/projected/aae905b3-cb6f-4fc1-ad9e-7b5638630d55-kube-api-access-vbcpv\") pod \"aae905b3-cb6f-4fc1-ad9e-7b5638630d55\" (UID: \"aae905b3-cb6f-4fc1-ad9e-7b5638630d55\") " Jan 21 19:26:04 crc kubenswrapper[5099]: I0121 19:26:04.718881 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aae905b3-cb6f-4fc1-ad9e-7b5638630d55-kube-api-access-vbcpv" (OuterVolumeSpecName: "kube-api-access-vbcpv") pod "aae905b3-cb6f-4fc1-ad9e-7b5638630d55" (UID: "aae905b3-cb6f-4fc1-ad9e-7b5638630d55"). InnerVolumeSpecName "kube-api-access-vbcpv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 19:26:04 crc kubenswrapper[5099]: I0121 19:26:04.813706 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vbcpv\" (UniqueName: \"kubernetes.io/projected/aae905b3-cb6f-4fc1-ad9e-7b5638630d55-kube-api-access-vbcpv\") on node \"crc\" DevicePath \"\"" Jan 21 19:26:05 crc kubenswrapper[5099]: I0121 19:26:05.278203 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483726-cnvwz" event={"ID":"aae905b3-cb6f-4fc1-ad9e-7b5638630d55","Type":"ContainerDied","Data":"1de46519703df5280a847c81965e2f967c46459bdc4a23e722e1502f1fe9a7a1"} Jan 21 19:26:05 crc kubenswrapper[5099]: I0121 19:26:05.278250 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1de46519703df5280a847c81965e2f967c46459bdc4a23e722e1502f1fe9a7a1" Jan 21 19:26:05 crc kubenswrapper[5099]: I0121 19:26:05.278309 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483726-cnvwz" Jan 21 19:26:05 crc kubenswrapper[5099]: I0121 19:26:05.346402 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483720-zmvtr"] Jan 21 19:26:05 crc kubenswrapper[5099]: I0121 19:26:05.352825 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483720-zmvtr"] Jan 21 19:26:05 crc kubenswrapper[5099]: I0121 19:26:05.929426 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="751dc716-b8a6-45b9-9c5e-0382252507ec" path="/var/lib/kubelet/pods/751dc716-b8a6-45b9-9c5e-0382252507ec/volumes" Jan 21 19:26:22 crc kubenswrapper[5099]: I0121 19:26:22.065011 5099 patch_prober.go:28] interesting pod/machine-config-daemon-hsl47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 19:26:22 crc kubenswrapper[5099]: I0121 19:26:22.068165 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 19:26:23 crc kubenswrapper[5099]: I0121 19:26:23.515514 5099 scope.go:117] "RemoveContainer" containerID="35c27a5fe3d17a3b17e4f43b89ed9649539673a274f20d507d67ee963b93dadd" Jan 21 19:26:52 crc kubenswrapper[5099]: I0121 19:26:52.065008 5099 patch_prober.go:28] interesting pod/machine-config-daemon-hsl47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 19:26:52 crc kubenswrapper[5099]: I0121 19:26:52.066502 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 19:27:22 crc kubenswrapper[5099]: I0121 19:27:22.105943 5099 patch_prober.go:28] interesting pod/machine-config-daemon-hsl47 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 19:27:22 crc kubenswrapper[5099]: I0121 19:27:22.106957 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 19:27:22 crc kubenswrapper[5099]: I0121 19:27:22.107036 5099 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" Jan 21 19:27:22 crc kubenswrapper[5099]: I0121 19:27:22.108249 5099 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dc0037083c7637e021a0e6669b28ab3b5890ed10b5268a8bde9f886e7d3b8304"} pod="openshift-machine-config-operator/machine-config-daemon-hsl47" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 19:27:22 crc kubenswrapper[5099]: I0121 19:27:22.108358 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerName="machine-config-daemon" containerID="cri-o://dc0037083c7637e021a0e6669b28ab3b5890ed10b5268a8bde9f886e7d3b8304" gracePeriod=600 Jan 21 19:27:22 crc kubenswrapper[5099]: E0121 19:27:22.273306 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 19:27:23 crc kubenswrapper[5099]: I0121 19:27:23.059852 5099 generic.go:358] "Generic (PLEG): container finished" podID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" containerID="dc0037083c7637e021a0e6669b28ab3b5890ed10b5268a8bde9f886e7d3b8304" exitCode=0 Jan 21 19:27:23 crc kubenswrapper[5099]: I0121 19:27:23.059967 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" event={"ID":"b19b831f-eaf0-4c77-859b-84eb9a5f233c","Type":"ContainerDied","Data":"dc0037083c7637e021a0e6669b28ab3b5890ed10b5268a8bde9f886e7d3b8304"} Jan 21 19:27:23 crc kubenswrapper[5099]: I0121 19:27:23.060715 5099 scope.go:117] "RemoveContainer" containerID="6d43ae40ee991ec5c4d2cb0a1afff3482a33aac8de2eb88f759bec4fad78d4e6" Jan 21 19:27:23 crc kubenswrapper[5099]: I0121 19:27:23.061185 5099 scope.go:117] "RemoveContainer" containerID="dc0037083c7637e021a0e6669b28ab3b5890ed10b5268a8bde9f886e7d3b8304" Jan 21 19:27:23 crc kubenswrapper[5099]: E0121 19:27:23.061526 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" 
Jan 21 19:27:33 crc kubenswrapper[5099]: I0121 19:27:33.924926 5099 scope.go:117] "RemoveContainer" containerID="dc0037083c7637e021a0e6669b28ab3b5890ed10b5268a8bde9f886e7d3b8304"
Jan 21 19:27:33 crc kubenswrapper[5099]: E0121 19:27:33.926136 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c"
Jan 21 19:27:46 crc kubenswrapper[5099]: I0121 19:27:46.915388 5099 scope.go:117] "RemoveContainer" containerID="dc0037083c7637e021a0e6669b28ab3b5890ed10b5268a8bde9f886e7d3b8304"
Jan 21 19:27:46 crc kubenswrapper[5099]: E0121 19:27:46.917083 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c"
Jan 21 19:27:57 crc kubenswrapper[5099]: I0121 19:27:57.915162 5099 scope.go:117] "RemoveContainer" containerID="dc0037083c7637e021a0e6669b28ab3b5890ed10b5268a8bde9f886e7d3b8304"
Jan 21 19:27:57 crc kubenswrapper[5099]: E0121 19:27:57.916815 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c"
Jan 21 19:28:00 crc kubenswrapper[5099]: I0121 19:28:00.157854 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483728-qlv9w"]
Jan 21 19:28:00 crc kubenswrapper[5099]: I0121 19:28:00.159727 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aae905b3-cb6f-4fc1-ad9e-7b5638630d55" containerName="oc"
Jan 21 19:28:00 crc kubenswrapper[5099]: I0121 19:28:00.159821 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="aae905b3-cb6f-4fc1-ad9e-7b5638630d55" containerName="oc"
Jan 21 19:28:00 crc kubenswrapper[5099]: I0121 19:28:00.160185 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="aae905b3-cb6f-4fc1-ad9e-7b5638630d55" containerName="oc"
Jan 21 19:28:00 crc kubenswrapper[5099]: I0121 19:28:00.188821 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483728-qlv9w"]
Jan 21 19:28:00 crc kubenswrapper[5099]: I0121 19:28:00.189223 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483728-qlv9w"
Jan 21 19:28:00 crc kubenswrapper[5099]: I0121 19:28:00.191574 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-z79sf\""
Jan 21 19:28:00 crc kubenswrapper[5099]: I0121 19:28:00.192849 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 21 19:28:00 crc kubenswrapper[5099]: I0121 19:28:00.193217 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 21 19:28:00 crc kubenswrapper[5099]: I0121 19:28:00.325173 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhqhv\" (UniqueName: \"kubernetes.io/projected/646c0f7b-d439-47a0-9bf6-d329f474fce7-kube-api-access-zhqhv\") pod \"auto-csr-approver-29483728-qlv9w\" (UID: \"646c0f7b-d439-47a0-9bf6-d329f474fce7\") " pod="openshift-infra/auto-csr-approver-29483728-qlv9w"
Jan 21 19:28:00 crc kubenswrapper[5099]: I0121 19:28:00.426299 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zhqhv\" (UniqueName: \"kubernetes.io/projected/646c0f7b-d439-47a0-9bf6-d329f474fce7-kube-api-access-zhqhv\") pod \"auto-csr-approver-29483728-qlv9w\" (UID: \"646c0f7b-d439-47a0-9bf6-d329f474fce7\") " pod="openshift-infra/auto-csr-approver-29483728-qlv9w"
Jan 21 19:28:00 crc kubenswrapper[5099]: I0121 19:28:00.453615 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhqhv\" (UniqueName: \"kubernetes.io/projected/646c0f7b-d439-47a0-9bf6-d329f474fce7-kube-api-access-zhqhv\") pod \"auto-csr-approver-29483728-qlv9w\" (UID: \"646c0f7b-d439-47a0-9bf6-d329f474fce7\") " pod="openshift-infra/auto-csr-approver-29483728-qlv9w"
Jan 21 19:28:00 crc kubenswrapper[5099]: I0121 19:28:00.522378 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483728-qlv9w"
Jan 21 19:28:00 crc kubenswrapper[5099]: I0121 19:28:00.769451 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483728-qlv9w"]
Jan 21 19:28:01 crc kubenswrapper[5099]: I0121 19:28:01.458213 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483728-qlv9w" event={"ID":"646c0f7b-d439-47a0-9bf6-d329f474fce7","Type":"ContainerStarted","Data":"5762528c47e17456147ffb58b202da87bd73898b4350aaca1d210d4a8a21b03b"}
Jan 21 19:28:02 crc kubenswrapper[5099]: I0121 19:28:02.467232 5099 generic.go:358] "Generic (PLEG): container finished" podID="646c0f7b-d439-47a0-9bf6-d329f474fce7" containerID="9694de469954095961fda6dcecd3e64094bcc3dd5e737f19259d3092ba754602" exitCode=0
Jan 21 19:28:02 crc kubenswrapper[5099]: I0121 19:28:02.467299 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483728-qlv9w" event={"ID":"646c0f7b-d439-47a0-9bf6-d329f474fce7","Type":"ContainerDied","Data":"9694de469954095961fda6dcecd3e64094bcc3dd5e737f19259d3092ba754602"}
Jan 21 19:28:03 crc kubenswrapper[5099]: I0121 19:28:03.806672 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483728-qlv9w"
Jan 21 19:28:03 crc kubenswrapper[5099]: I0121 19:28:03.883246 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhqhv\" (UniqueName: \"kubernetes.io/projected/646c0f7b-d439-47a0-9bf6-d329f474fce7-kube-api-access-zhqhv\") pod \"646c0f7b-d439-47a0-9bf6-d329f474fce7\" (UID: \"646c0f7b-d439-47a0-9bf6-d329f474fce7\") "
Jan 21 19:28:03 crc kubenswrapper[5099]: I0121 19:28:03.890829 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/646c0f7b-d439-47a0-9bf6-d329f474fce7-kube-api-access-zhqhv" (OuterVolumeSpecName: "kube-api-access-zhqhv") pod "646c0f7b-d439-47a0-9bf6-d329f474fce7" (UID: "646c0f7b-d439-47a0-9bf6-d329f474fce7"). InnerVolumeSpecName "kube-api-access-zhqhv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 19:28:03 crc kubenswrapper[5099]: I0121 19:28:03.984806 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zhqhv\" (UniqueName: \"kubernetes.io/projected/646c0f7b-d439-47a0-9bf6-d329f474fce7-kube-api-access-zhqhv\") on node \"crc\" DevicePath \"\""
Jan 21 19:28:04 crc kubenswrapper[5099]: I0121 19:28:04.487363 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483728-qlv9w" event={"ID":"646c0f7b-d439-47a0-9bf6-d329f474fce7","Type":"ContainerDied","Data":"5762528c47e17456147ffb58b202da87bd73898b4350aaca1d210d4a8a21b03b"}
Jan 21 19:28:04 crc kubenswrapper[5099]: I0121 19:28:04.487720 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5762528c47e17456147ffb58b202da87bd73898b4350aaca1d210d4a8a21b03b"
Jan 21 19:28:04 crc kubenswrapper[5099]: I0121 19:28:04.487952 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483728-qlv9w"
Jan 21 19:28:04 crc kubenswrapper[5099]: I0121 19:28:04.898685 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483722-qgqh8"]
Jan 21 19:28:04 crc kubenswrapper[5099]: I0121 19:28:04.911052 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483722-qgqh8"]
Jan 21 19:28:05 crc kubenswrapper[5099]: I0121 19:28:05.925916 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffb1c4b7-6b12-4c13-b954-a6fec8e8a2e1" path="/var/lib/kubelet/pods/ffb1c4b7-6b12-4c13-b954-a6fec8e8a2e1/volumes"
Jan 21 19:28:09 crc kubenswrapper[5099]: I0121 19:28:09.914241 5099 scope.go:117] "RemoveContainer" containerID="dc0037083c7637e021a0e6669b28ab3b5890ed10b5268a8bde9f886e7d3b8304"
Jan 21 19:28:09 crc kubenswrapper[5099]: E0121 19:28:09.915068 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c"
Jan 21 19:28:21 crc kubenswrapper[5099]: I0121 19:28:21.919078 5099 scope.go:117] "RemoveContainer" containerID="dc0037083c7637e021a0e6669b28ab3b5890ed10b5268a8bde9f886e7d3b8304"
Jan 21 19:28:21 crc kubenswrapper[5099]: E0121 19:28:21.921067 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c"
Jan 21 19:28:23 crc kubenswrapper[5099]: I0121 19:28:23.725272 5099 scope.go:117] "RemoveContainer" containerID="5bc3d862316f0b929f48a3ca34b555c728a97a2e93c32999576899da5c67c36d"
Jan 21 19:28:35 crc kubenswrapper[5099]: I0121 19:28:35.914512 5099 scope.go:117] "RemoveContainer" containerID="dc0037083c7637e021a0e6669b28ab3b5890ed10b5268a8bde9f886e7d3b8304"
Jan 21 19:28:35 crc kubenswrapper[5099]: E0121 19:28:35.916032 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c"
Jan 21 19:28:47 crc kubenswrapper[5099]: I0121 19:28:47.920444 5099 scope.go:117] "RemoveContainer" containerID="dc0037083c7637e021a0e6669b28ab3b5890ed10b5268a8bde9f886e7d3b8304"
Jan 21 19:28:47 crc kubenswrapper[5099]: E0121 19:28:47.921949 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c"
Jan 21 19:28:58 crc kubenswrapper[5099]: I0121 19:28:58.913902 5099 scope.go:117] "RemoveContainer" containerID="dc0037083c7637e021a0e6669b28ab3b5890ed10b5268a8bde9f886e7d3b8304"
Jan 21 19:28:58 crc kubenswrapper[5099]: E0121 19:28:58.914821 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c"
Jan 21 19:29:06 crc kubenswrapper[5099]: I0121 19:29:06.089933 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6pvpm_d9b34413-4767-4d59-b13b-8f882453977a/kube-multus/0.log"
Jan 21 19:29:06 crc kubenswrapper[5099]: I0121 19:29:06.091912 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6pvpm_d9b34413-4767-4d59-b13b-8f882453977a/kube-multus/0.log"
Jan 21 19:29:06 crc kubenswrapper[5099]: I0121 19:29:06.101048 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 21 19:29:06 crc kubenswrapper[5099]: I0121 19:29:06.101531 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 21 19:29:13 crc kubenswrapper[5099]: I0121 19:29:13.922549 5099 scope.go:117] "RemoveContainer" containerID="dc0037083c7637e021a0e6669b28ab3b5890ed10b5268a8bde9f886e7d3b8304"
Jan 21 19:29:13 crc kubenswrapper[5099]: E0121 19:29:13.923448 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c"
Jan 21 19:29:26 crc kubenswrapper[5099]: I0121 19:29:26.914134 5099 scope.go:117] "RemoveContainer" containerID="dc0037083c7637e021a0e6669b28ab3b5890ed10b5268a8bde9f886e7d3b8304"
Jan 21 19:29:26 crc kubenswrapper[5099]: E0121 19:29:26.915605 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c"
Jan 21 19:29:41 crc kubenswrapper[5099]: I0121 19:29:41.914337 5099 scope.go:117] "RemoveContainer" containerID="dc0037083c7637e021a0e6669b28ab3b5890ed10b5268a8bde9f886e7d3b8304"
Jan 21 19:29:41 crc kubenswrapper[5099]: E0121 19:29:41.915392 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c"
Jan 21 19:29:56 crc kubenswrapper[5099]: I0121 19:29:56.913659 5099 scope.go:117] "RemoveContainer" containerID="dc0037083c7637e021a0e6669b28ab3b5890ed10b5268a8bde9f886e7d3b8304"
Jan 21 19:29:56 crc kubenswrapper[5099]: E0121 19:29:56.917088 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c"
Jan 21 19:30:00 crc kubenswrapper[5099]: I0121 19:30:00.159006 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483730-ttxw5"]
Jan 21 19:30:00 crc kubenswrapper[5099]: I0121 19:30:00.160859 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="646c0f7b-d439-47a0-9bf6-d329f474fce7" containerName="oc"
Jan 21 19:30:00 crc kubenswrapper[5099]: I0121 19:30:00.160941 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="646c0f7b-d439-47a0-9bf6-d329f474fce7" containerName="oc"
Jan 21 19:30:00 crc kubenswrapper[5099]: I0121 19:30:00.161288 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="646c0f7b-d439-47a0-9bf6-d329f474fce7" containerName="oc"
Jan 21 19:30:00 crc kubenswrapper[5099]: I0121 19:30:00.175023 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483730-ttxw5"]
Jan 21 19:30:00 crc kubenswrapper[5099]: I0121 19:30:00.175258 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483730-ttxw5"
Jan 21 19:30:00 crc kubenswrapper[5099]: I0121 19:30:00.184469 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 21 19:30:00 crc kubenswrapper[5099]: I0121 19:30:00.184512 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 21 19:30:00 crc kubenswrapper[5099]: I0121 19:30:00.184523 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-z79sf\""
Jan 21 19:30:00 crc kubenswrapper[5099]: I0121 19:30:00.247843 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483730-8z5lr"]
Jan 21 19:30:00 crc kubenswrapper[5099]: I0121 19:30:00.252533 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483730-8z5lr"
Jan 21 19:30:00 crc kubenswrapper[5099]: I0121 19:30:00.255264 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\""
Jan 21 19:30:00 crc kubenswrapper[5099]: I0121 19:30:00.255475 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\""
Jan 21 19:30:00 crc kubenswrapper[5099]: I0121 19:30:00.257277 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483730-8z5lr"]
Jan 21 19:30:00 crc kubenswrapper[5099]: I0121 19:30:00.352304 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3fc4ae50-b402-417a-beec-8335ddea4b59-secret-volume\") pod \"collect-profiles-29483730-8z5lr\" (UID: \"3fc4ae50-b402-417a-beec-8335ddea4b59\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483730-8z5lr"
Jan 21 19:30:00 crc kubenswrapper[5099]: I0121 19:30:00.352725 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzq5x\" (UniqueName: \"kubernetes.io/projected/3fc4ae50-b402-417a-beec-8335ddea4b59-kube-api-access-mzq5x\") pod \"collect-profiles-29483730-8z5lr\" (UID: \"3fc4ae50-b402-417a-beec-8335ddea4b59\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483730-8z5lr"
Jan 21 19:30:00 crc kubenswrapper[5099]: I0121 19:30:00.352984 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3fc4ae50-b402-417a-beec-8335ddea4b59-config-volume\") pod \"collect-profiles-29483730-8z5lr\" (UID: \"3fc4ae50-b402-417a-beec-8335ddea4b59\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483730-8z5lr"
Jan 21 19:30:00 crc kubenswrapper[5099]: I0121 19:30:00.353303 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np2jf\" (UniqueName: \"kubernetes.io/projected/05ae372b-e4e7-4347-b354-c2b196d49089-kube-api-access-np2jf\") pod \"auto-csr-approver-29483730-ttxw5\" (UID: \"05ae372b-e4e7-4347-b354-c2b196d49089\") " pod="openshift-infra/auto-csr-approver-29483730-ttxw5"
Jan 21 19:30:00 crc kubenswrapper[5099]: I0121 19:30:00.455562 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-np2jf\" (UniqueName: \"kubernetes.io/projected/05ae372b-e4e7-4347-b354-c2b196d49089-kube-api-access-np2jf\") pod \"auto-csr-approver-29483730-ttxw5\" (UID: \"05ae372b-e4e7-4347-b354-c2b196d49089\") " pod="openshift-infra/auto-csr-approver-29483730-ttxw5"
Jan 21 19:30:00 crc kubenswrapper[5099]: I0121 19:30:00.455721 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3fc4ae50-b402-417a-beec-8335ddea4b59-secret-volume\") pod \"collect-profiles-29483730-8z5lr\" (UID: \"3fc4ae50-b402-417a-beec-8335ddea4b59\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483730-8z5lr"
Jan 21 19:30:00 crc kubenswrapper[5099]: I0121 19:30:00.455816 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mzq5x\" (UniqueName: \"kubernetes.io/projected/3fc4ae50-b402-417a-beec-8335ddea4b59-kube-api-access-mzq5x\") pod \"collect-profiles-29483730-8z5lr\" (UID: \"3fc4ae50-b402-417a-beec-8335ddea4b59\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483730-8z5lr"
Jan 21 19:30:00 crc kubenswrapper[5099]: I0121 19:30:00.455907 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3fc4ae50-b402-417a-beec-8335ddea4b59-config-volume\") pod \"collect-profiles-29483730-8z5lr\" (UID: \"3fc4ae50-b402-417a-beec-8335ddea4b59\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483730-8z5lr"
Jan 21 19:30:00 crc kubenswrapper[5099]: I0121 19:30:00.458195 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3fc4ae50-b402-417a-beec-8335ddea4b59-config-volume\") pod \"collect-profiles-29483730-8z5lr\" (UID: \"3fc4ae50-b402-417a-beec-8335ddea4b59\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483730-8z5lr"
Jan 21 19:30:00 crc kubenswrapper[5099]: I0121 19:30:00.467422 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3fc4ae50-b402-417a-beec-8335ddea4b59-secret-volume\") pod \"collect-profiles-29483730-8z5lr\" (UID: \"3fc4ae50-b402-417a-beec-8335ddea4b59\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483730-8z5lr"
Jan 21 19:30:00 crc kubenswrapper[5099]: I0121 19:30:00.485102 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzq5x\" (UniqueName: \"kubernetes.io/projected/3fc4ae50-b402-417a-beec-8335ddea4b59-kube-api-access-mzq5x\") pod \"collect-profiles-29483730-8z5lr\" (UID: \"3fc4ae50-b402-417a-beec-8335ddea4b59\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483730-8z5lr"
Jan 21 19:30:00 crc kubenswrapper[5099]: I0121 19:30:00.493991 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-np2jf\" (UniqueName: \"kubernetes.io/projected/05ae372b-e4e7-4347-b354-c2b196d49089-kube-api-access-np2jf\") pod \"auto-csr-approver-29483730-ttxw5\" (UID: \"05ae372b-e4e7-4347-b354-c2b196d49089\") " pod="openshift-infra/auto-csr-approver-29483730-ttxw5"
Jan 21 19:30:00 crc kubenswrapper[5099]: I0121 19:30:00.508436 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483730-ttxw5"
Jan 21 19:30:00 crc kubenswrapper[5099]: I0121 19:30:00.568503 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483730-8z5lr"
Jan 21 19:30:00 crc kubenswrapper[5099]: I0121 19:30:00.775688 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483730-ttxw5"]
Jan 21 19:30:00 crc kubenswrapper[5099]: I0121 19:30:00.782083 5099 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 19:30:00 crc kubenswrapper[5099]: I0121 19:30:00.813180 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483730-8z5lr"]
Jan 21 19:30:00 crc kubenswrapper[5099]: W0121 19:30:00.819713 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3fc4ae50_b402_417a_beec_8335ddea4b59.slice/crio-fb891552cdbb512ebd5845a7e8e98a9021614c7db7a171ed921a1137983223d9 WatchSource:0}: Error finding container fb891552cdbb512ebd5845a7e8e98a9021614c7db7a171ed921a1137983223d9: Status 404 returned error can't find the container with id fb891552cdbb512ebd5845a7e8e98a9021614c7db7a171ed921a1137983223d9
Jan 21 19:30:01 crc kubenswrapper[5099]: I0121 19:30:01.613537 5099 generic.go:358] "Generic (PLEG): container finished" podID="3fc4ae50-b402-417a-beec-8335ddea4b59" containerID="890efc43efb225e440c62c8387294e12707e8795b815a0bbd55966f27cf4f88d" exitCode=0
Jan 21 19:30:01 crc kubenswrapper[5099]: I0121 19:30:01.613700 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483730-8z5lr" event={"ID":"3fc4ae50-b402-417a-beec-8335ddea4b59","Type":"ContainerDied","Data":"890efc43efb225e440c62c8387294e12707e8795b815a0bbd55966f27cf4f88d"}
Jan 21 19:30:01 crc kubenswrapper[5099]: I0121 19:30:01.613951 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483730-8z5lr" event={"ID":"3fc4ae50-b402-417a-beec-8335ddea4b59","Type":"ContainerStarted","Data":"fb891552cdbb512ebd5845a7e8e98a9021614c7db7a171ed921a1137983223d9"}
Jan 21 19:30:01 crc kubenswrapper[5099]: I0121 19:30:01.615428 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483730-ttxw5" event={"ID":"05ae372b-e4e7-4347-b354-c2b196d49089","Type":"ContainerStarted","Data":"1473a32ec564d591c418ee44f357a818ac4611897da43589adbc2eefab5cac10"}
Jan 21 19:30:02 crc kubenswrapper[5099]: I0121 19:30:02.624473 5099 generic.go:358] "Generic (PLEG): container finished" podID="05ae372b-e4e7-4347-b354-c2b196d49089" containerID="ca40a2bfc5e420d56bd823dc20fa2ab1db6cf158824249f2ddc8b1b215df5d68" exitCode=0
Jan 21 19:30:02 crc kubenswrapper[5099]: I0121 19:30:02.624684 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483730-ttxw5" event={"ID":"05ae372b-e4e7-4347-b354-c2b196d49089","Type":"ContainerDied","Data":"ca40a2bfc5e420d56bd823dc20fa2ab1db6cf158824249f2ddc8b1b215df5d68"}
Jan 21 19:30:02 crc kubenswrapper[5099]: I0121 19:30:02.866495 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483730-8z5lr"
Jan 21 19:30:02 crc kubenswrapper[5099]: I0121 19:30:02.913572 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3fc4ae50-b402-417a-beec-8335ddea4b59-secret-volume\") pod \"3fc4ae50-b402-417a-beec-8335ddea4b59\" (UID: \"3fc4ae50-b402-417a-beec-8335ddea4b59\") "
Jan 21 19:30:02 crc kubenswrapper[5099]: I0121 19:30:02.913664 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzq5x\" (UniqueName: \"kubernetes.io/projected/3fc4ae50-b402-417a-beec-8335ddea4b59-kube-api-access-mzq5x\") pod \"3fc4ae50-b402-417a-beec-8335ddea4b59\" (UID: \"3fc4ae50-b402-417a-beec-8335ddea4b59\") "
Jan 21 19:30:02 crc kubenswrapper[5099]: I0121 19:30:02.913722 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3fc4ae50-b402-417a-beec-8335ddea4b59-config-volume\") pod \"3fc4ae50-b402-417a-beec-8335ddea4b59\" (UID: \"3fc4ae50-b402-417a-beec-8335ddea4b59\") "
Jan 21 19:30:02 crc kubenswrapper[5099]: I0121 19:30:02.914885 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fc4ae50-b402-417a-beec-8335ddea4b59-config-volume" (OuterVolumeSpecName: "config-volume") pod "3fc4ae50-b402-417a-beec-8335ddea4b59" (UID: "3fc4ae50-b402-417a-beec-8335ddea4b59"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 19:30:02 crc kubenswrapper[5099]: I0121 19:30:02.920837 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fc4ae50-b402-417a-beec-8335ddea4b59-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3fc4ae50-b402-417a-beec-8335ddea4b59" (UID: "3fc4ae50-b402-417a-beec-8335ddea4b59"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 19:30:02 crc kubenswrapper[5099]: I0121 19:30:02.921323 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fc4ae50-b402-417a-beec-8335ddea4b59-kube-api-access-mzq5x" (OuterVolumeSpecName: "kube-api-access-mzq5x") pod "3fc4ae50-b402-417a-beec-8335ddea4b59" (UID: "3fc4ae50-b402-417a-beec-8335ddea4b59"). InnerVolumeSpecName "kube-api-access-mzq5x". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 19:30:03 crc kubenswrapper[5099]: I0121 19:30:03.015651 5099 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3fc4ae50-b402-417a-beec-8335ddea4b59-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 21 19:30:03 crc kubenswrapper[5099]: I0121 19:30:03.015688 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mzq5x\" (UniqueName: \"kubernetes.io/projected/3fc4ae50-b402-417a-beec-8335ddea4b59-kube-api-access-mzq5x\") on node \"crc\" DevicePath \"\""
Jan 21 19:30:03 crc kubenswrapper[5099]: I0121 19:30:03.015700 5099 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3fc4ae50-b402-417a-beec-8335ddea4b59-config-volume\") on node \"crc\" DevicePath \"\""
Jan 21 19:30:03 crc kubenswrapper[5099]: I0121 19:30:03.638608 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483730-8z5lr"
Jan 21 19:30:03 crc kubenswrapper[5099]: I0121 19:30:03.638610 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483730-8z5lr" event={"ID":"3fc4ae50-b402-417a-beec-8335ddea4b59","Type":"ContainerDied","Data":"fb891552cdbb512ebd5845a7e8e98a9021614c7db7a171ed921a1137983223d9"}
Jan 21 19:30:03 crc kubenswrapper[5099]: I0121 19:30:03.638816 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb891552cdbb512ebd5845a7e8e98a9021614c7db7a171ed921a1137983223d9"
Jan 21 19:30:03 crc kubenswrapper[5099]: I0121 19:30:03.957973 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483685-hp84l"]
Jan 21 19:30:03 crc kubenswrapper[5099]: I0121 19:30:03.974117 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483685-hp84l"]
Jan 21 19:30:03 crc kubenswrapper[5099]: I0121 19:30:03.991462 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483730-ttxw5"
Jan 21 19:30:04 crc kubenswrapper[5099]: I0121 19:30:04.033708 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-np2jf\" (UniqueName: \"kubernetes.io/projected/05ae372b-e4e7-4347-b354-c2b196d49089-kube-api-access-np2jf\") pod \"05ae372b-e4e7-4347-b354-c2b196d49089\" (UID: \"05ae372b-e4e7-4347-b354-c2b196d49089\") "
Jan 21 19:30:04 crc kubenswrapper[5099]: I0121 19:30:04.040018 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05ae372b-e4e7-4347-b354-c2b196d49089-kube-api-access-np2jf" (OuterVolumeSpecName: "kube-api-access-np2jf") pod "05ae372b-e4e7-4347-b354-c2b196d49089" (UID: "05ae372b-e4e7-4347-b354-c2b196d49089"). InnerVolumeSpecName "kube-api-access-np2jf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 19:30:04 crc kubenswrapper[5099]: I0121 19:30:04.135720 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-np2jf\" (UniqueName: \"kubernetes.io/projected/05ae372b-e4e7-4347-b354-c2b196d49089-kube-api-access-np2jf\") on node \"crc\" DevicePath \"\""
Jan 21 19:30:04 crc kubenswrapper[5099]: I0121 19:30:04.652217 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483730-ttxw5"
Jan 21 19:30:04 crc kubenswrapper[5099]: I0121 19:30:04.652237 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483730-ttxw5" event={"ID":"05ae372b-e4e7-4347-b354-c2b196d49089","Type":"ContainerDied","Data":"1473a32ec564d591c418ee44f357a818ac4611897da43589adbc2eefab5cac10"}
Jan 21 19:30:04 crc kubenswrapper[5099]: I0121 19:30:04.652817 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1473a32ec564d591c418ee44f357a818ac4611897da43589adbc2eefab5cac10"
Jan 21 19:30:05 crc kubenswrapper[5099]: I0121 19:30:05.058856 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483724-txczr"]
Jan 21 19:30:05 crc kubenswrapper[5099]: I0121 19:30:05.065446 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483724-txczr"]
Jan 21 19:30:05 crc kubenswrapper[5099]: E0121 19:30:05.349719 5099 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3fc4ae50_b402_417a_beec_8335ddea4b59.slice\": RecentStats: unable to find data in memory cache]"
Jan 21 19:30:05 crc kubenswrapper[5099]: I0121 19:30:05.927420 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47104d4a-0d52-4671-a37b-92d039c4b9c9" path="/var/lib/kubelet/pods/47104d4a-0d52-4671-a37b-92d039c4b9c9/volumes"
Jan 21 19:30:05 crc kubenswrapper[5099]: I0121 19:30:05.929299 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b391ed19-3c37-4895-8b2d-d097e67c01ba" path="/var/lib/kubelet/pods/b391ed19-3c37-4895-8b2d-d097e67c01ba/volumes"
Jan 21 19:30:09 crc kubenswrapper[5099]: I0121 19:30:09.927339 5099 scope.go:117] "RemoveContainer" containerID="dc0037083c7637e021a0e6669b28ab3b5890ed10b5268a8bde9f886e7d3b8304"
Jan 21 19:30:09 crc kubenswrapper[5099]: E0121 19:30:09.927935 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c"
Jan 21 19:30:15 crc kubenswrapper[5099]: E0121 19:30:15.513826 5099 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3fc4ae50_b402_417a_beec_8335ddea4b59.slice\": RecentStats: unable to find data in memory cache]"
Jan 21 19:30:23 crc kubenswrapper[5099]: I0121 19:30:23.920259 5099 scope.go:117] "RemoveContainer" containerID="fec06969ffdf64f158602295d70afb99a4d212c44e05c93be38ccbbe2d6b0239"
Jan 21 19:30:23 crc kubenswrapper[5099]: I0121 19:30:23.956049 5099 scope.go:117] "RemoveContainer" containerID="dfe0b03c6e25fae6d866ef7e22a882af9b138ca86d4b23d6b55675318ff1fc0e"
Jan 21 19:30:24 crc kubenswrapper[5099]: I0121 19:30:24.914236 5099 scope.go:117] "RemoveContainer" containerID="dc0037083c7637e021a0e6669b28ab3b5890ed10b5268a8bde9f886e7d3b8304"
Jan 21 19:30:24 crc kubenswrapper[5099]: E0121 19:30:24.914578 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 19:30:25 crc kubenswrapper[5099]: E0121 19:30:25.726393 5099 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3fc4ae50_b402_417a_beec_8335ddea4b59.slice\": RecentStats: unable to find data in memory cache]" Jan 21 19:30:35 crc kubenswrapper[5099]: E0121 19:30:35.891341 5099 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3fc4ae50_b402_417a_beec_8335ddea4b59.slice\": RecentStats: unable to find data in memory cache]" Jan 21 19:30:35 crc kubenswrapper[5099]: I0121 19:30:35.914659 5099 scope.go:117] "RemoveContainer" containerID="dc0037083c7637e021a0e6669b28ab3b5890ed10b5268a8bde9f886e7d3b8304" Jan 21 19:30:35 crc kubenswrapper[5099]: E0121 19:30:35.915041 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 19:30:46 crc kubenswrapper[5099]: E0121 19:30:46.047452 5099 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3fc4ae50_b402_417a_beec_8335ddea4b59.slice\": RecentStats: unable to find data in memory cache]" Jan 21 19:30:47 crc kubenswrapper[5099]: I0121 19:30:47.914957 5099 scope.go:117] "RemoveContainer" containerID="dc0037083c7637e021a0e6669b28ab3b5890ed10b5268a8bde9f886e7d3b8304" Jan 21 19:30:47 crc kubenswrapper[5099]: E0121 19:30:47.915636 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 19:30:56 crc kubenswrapper[5099]: E0121 19:30:56.241394 5099 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3fc4ae50_b402_417a_beec_8335ddea4b59.slice\": RecentStats: unable to find data in memory cache]" Jan 21 19:31:00 crc kubenswrapper[5099]: I0121 19:31:00.913592 5099 scope.go:117] "RemoveContainer" containerID="dc0037083c7637e021a0e6669b28ab3b5890ed10b5268a8bde9f886e7d3b8304" Jan 21 19:31:00 crc kubenswrapper[5099]: E0121 19:31:00.914706 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 19:31:15 crc kubenswrapper[5099]: I0121 19:31:15.914507 5099 scope.go:117] "RemoveContainer" containerID="dc0037083c7637e021a0e6669b28ab3b5890ed10b5268a8bde9f886e7d3b8304" Jan 21 19:31:15 crc kubenswrapper[5099]: E0121 19:31:15.916002 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 19:31:27 crc kubenswrapper[5099]: I0121 19:31:27.913778 5099 scope.go:117] "RemoveContainer" containerID="dc0037083c7637e021a0e6669b28ab3b5890ed10b5268a8bde9f886e7d3b8304" Jan 21 19:31:27 crc kubenswrapper[5099]: E0121 19:31:27.914651 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 19:31:40 crc kubenswrapper[5099]: I0121 19:31:40.914901 5099 scope.go:117] "RemoveContainer" containerID="dc0037083c7637e021a0e6669b28ab3b5890ed10b5268a8bde9f886e7d3b8304" Jan 21 19:31:40 crc kubenswrapper[5099]: E0121 19:31:40.916277 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 19:31:53 crc kubenswrapper[5099]: I0121 19:31:53.927604 5099 scope.go:117] "RemoveContainer" containerID="dc0037083c7637e021a0e6669b28ab3b5890ed10b5268a8bde9f886e7d3b8304" Jan 21 19:31:53 crc kubenswrapper[5099]: E0121 19:31:53.929169 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c" Jan 21 19:32:00 crc kubenswrapper[5099]: I0121 19:32:00.158621 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483732-thhng"] Jan 21 19:32:00 crc kubenswrapper[5099]: I0121 19:32:00.160857 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3fc4ae50-b402-417a-beec-8335ddea4b59" containerName="collect-profiles" Jan 21 19:32:00 crc kubenswrapper[5099]: I0121 19:32:00.160897 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fc4ae50-b402-417a-beec-8335ddea4b59" containerName="collect-profiles" Jan 21 19:32:00 crc kubenswrapper[5099]: I0121 19:32:00.160960 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="05ae372b-e4e7-4347-b354-c2b196d49089" containerName="oc" Jan 21 19:32:00 crc kubenswrapper[5099]: I0121 19:32:00.160972 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="05ae372b-e4e7-4347-b354-c2b196d49089" containerName="oc" Jan 21 19:32:00 crc kubenswrapper[5099]: I0121 19:32:00.161247 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="3fc4ae50-b402-417a-beec-8335ddea4b59" containerName="collect-profiles" Jan 21 19:32:00 crc kubenswrapper[5099]: I0121 19:32:00.161284 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="05ae372b-e4e7-4347-b354-c2b196d49089" containerName="oc" Jan 21 19:32:00 crc kubenswrapper[5099]: I0121 19:32:00.172246 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483732-thhng"] Jan 21 19:32:00 crc kubenswrapper[5099]: I0121 19:32:00.172415 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483732-thhng" Jan 21 19:32:00 crc kubenswrapper[5099]: I0121 19:32:00.178126 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 19:32:00 crc kubenswrapper[5099]: I0121 19:32:00.178295 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-z79sf\"" Jan 21 19:32:00 crc kubenswrapper[5099]: I0121 19:32:00.178698 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 19:32:00 crc kubenswrapper[5099]: I0121 19:32:00.262801 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk5sf\" (UniqueName: \"kubernetes.io/projected/a71ab891-a188-4783-bbc4-ce1aedab75ef-kube-api-access-tk5sf\") pod \"auto-csr-approver-29483732-thhng\" (UID: \"a71ab891-a188-4783-bbc4-ce1aedab75ef\") " pod="openshift-infra/auto-csr-approver-29483732-thhng" Jan 21 19:32:00 crc kubenswrapper[5099]: I0121 19:32:00.363853 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tk5sf\" (UniqueName: \"kubernetes.io/projected/a71ab891-a188-4783-bbc4-ce1aedab75ef-kube-api-access-tk5sf\") pod \"auto-csr-approver-29483732-thhng\" (UID: \"a71ab891-a188-4783-bbc4-ce1aedab75ef\") " pod="openshift-infra/auto-csr-approver-29483732-thhng" Jan 21 19:32:00 crc kubenswrapper[5099]: I0121 19:32:00.401074 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk5sf\" (UniqueName: \"kubernetes.io/projected/a71ab891-a188-4783-bbc4-ce1aedab75ef-kube-api-access-tk5sf\") pod \"auto-csr-approver-29483732-thhng\" (UID: \"a71ab891-a188-4783-bbc4-ce1aedab75ef\") " pod="openshift-infra/auto-csr-approver-29483732-thhng" Jan 21 19:32:00 crc kubenswrapper[5099]: I0121 19:32:00.507303 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483732-thhng" Jan 21 19:32:00 crc kubenswrapper[5099]: I0121 19:32:00.746863 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483732-thhng"] Jan 21 19:32:01 crc kubenswrapper[5099]: I0121 19:32:01.784256 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483732-thhng" event={"ID":"a71ab891-a188-4783-bbc4-ce1aedab75ef","Type":"ContainerStarted","Data":"f73709ae4c29d79991a9e500b98d292310c97dbfe2fd72a8fd9805b45960bb77"} Jan 21 19:32:02 crc kubenswrapper[5099]: I0121 19:32:02.798580 5099 generic.go:358] "Generic (PLEG): container finished" podID="a71ab891-a188-4783-bbc4-ce1aedab75ef" containerID="ae6a8b8cf37a0e0db911184fff27890c0b9436b0c4c1d3d9cc74dc47b8e4e9af" exitCode=0 Jan 21 19:32:02 crc kubenswrapper[5099]: I0121 19:32:02.798705 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483732-thhng" event={"ID":"a71ab891-a188-4783-bbc4-ce1aedab75ef","Type":"ContainerDied","Data":"ae6a8b8cf37a0e0db911184fff27890c0b9436b0c4c1d3d9cc74dc47b8e4e9af"} Jan 21 19:32:04 crc kubenswrapper[5099]: I0121 19:32:04.089677 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483732-thhng" Jan 21 19:32:04 crc kubenswrapper[5099]: I0121 19:32:04.165624 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk5sf\" (UniqueName: \"kubernetes.io/projected/a71ab891-a188-4783-bbc4-ce1aedab75ef-kube-api-access-tk5sf\") pod \"a71ab891-a188-4783-bbc4-ce1aedab75ef\" (UID: \"a71ab891-a188-4783-bbc4-ce1aedab75ef\") " Jan 21 19:32:04 crc kubenswrapper[5099]: I0121 19:32:04.174390 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a71ab891-a188-4783-bbc4-ce1aedab75ef-kube-api-access-tk5sf" (OuterVolumeSpecName: "kube-api-access-tk5sf") pod "a71ab891-a188-4783-bbc4-ce1aedab75ef" (UID: "a71ab891-a188-4783-bbc4-ce1aedab75ef"). InnerVolumeSpecName "kube-api-access-tk5sf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 19:32:04 crc kubenswrapper[5099]: I0121 19:32:04.267413 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tk5sf\" (UniqueName: \"kubernetes.io/projected/a71ab891-a188-4783-bbc4-ce1aedab75ef-kube-api-access-tk5sf\") on node \"crc\" DevicePath \"\"" Jan 21 19:32:04 crc kubenswrapper[5099]: I0121 19:32:04.825787 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483732-thhng" event={"ID":"a71ab891-a188-4783-bbc4-ce1aedab75ef","Type":"ContainerDied","Data":"f73709ae4c29d79991a9e500b98d292310c97dbfe2fd72a8fd9805b45960bb77"} Jan 21 19:32:04 crc kubenswrapper[5099]: I0121 19:32:04.825825 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483732-thhng" Jan 21 19:32:04 crc kubenswrapper[5099]: I0121 19:32:04.825854 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f73709ae4c29d79991a9e500b98d292310c97dbfe2fd72a8fd9805b45960bb77" Jan 21 19:32:05 crc kubenswrapper[5099]: I0121 19:32:05.163087 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483726-cnvwz"] Jan 21 19:32:05 crc kubenswrapper[5099]: I0121 19:32:05.174434 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483726-cnvwz"] Jan 21 19:32:05 crc kubenswrapper[5099]: I0121 19:32:05.929176 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aae905b3-cb6f-4fc1-ad9e-7b5638630d55" path="/var/lib/kubelet/pods/aae905b3-cb6f-4fc1-ad9e-7b5638630d55/volumes" Jan 21 19:32:08 crc kubenswrapper[5099]: I0121 19:32:08.913443 5099 scope.go:117] "RemoveContainer" containerID="dc0037083c7637e021a0e6669b28ab3b5890ed10b5268a8bde9f886e7d3b8304" Jan 21 19:32:08 crc kubenswrapper[5099]: E0121 19:32:08.915610 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsl47_openshift-machine-config-operator(b19b831f-eaf0-4c77-859b-84eb9a5f233c)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsl47" podUID="b19b831f-eaf0-4c77-859b-84eb9a5f233c"